English Literature and Translation: The Hierarchy of Image Processing Operations
Image Processing: Foreign Literature Translation (2)
Appendix 1: English Original

The Difference Between Illustrator and Photoshop

Photoshop and Illustrator are both products of Adobe. Photoshop, the more familiar of the two, is an image-processing program that integrates image scanning, editing and retouching, image creation, advertising design, and image input and output, and it is popular with graphic designers and computer-art enthusiasts. Photoshop's strength is image processing, not graphics creation. Its field of application is very broad, touching images, graphics, text, video, and publishing.

In terms of function, Photoshop can be divided into image editing, image compositing, color correction, and special-effects production. Image editing is the foundation of image processing: an image can be transformed in various ways, such as enlarging, reducing, rotating, skewing, mirroring, and applying perspective, and it can be copied, cleaned of blemishes, and repaired when damaged. This is very useful in wedding photography and portrait work, where unsatisfactory parts of a portrait can be removed or beautified to give a very pleasing result. Image compositing combines several images, through layer operations and tool work, into a complete image that conveys a definite meaning; this is a standard technique in graphic design. Photoshop's drawing tools let imported images and original artwork blend well, so the composite can be made seamless. Color correction is one of Photoshop's most powerful functions: image colors can be quickly adjusted, color casts corrected, and images converted between different color modes to suit applications such as web design, printing, and multimedia. Special effects are produced mainly by combining filters, channels, and tools. Creative image effects and special-effect text such as oil paintings, reliefs, plaster effects, and sketches, which traditionally require considerable artistic skill, can all be produced in Photoshop, and the production of such effects is one reason many designers are keen to study it.

When using Photoshop's color functions, users encounter several different color modes: RGB, CMYK, HSB, and Lab. The RGB and CMYK color modes remind users that natural color, the color on a monitor, and the color on a printed page are created in entirely different ways. A monitor creates color by emitting beams of red, green, and blue light: it uses the RGB (red/green/blue) color model. To render the continuous tones of a complex color photograph, printing uses combinations of cyan, magenta, yellow, and black inks, which reflect or absorb light of various wavelengths; the color created by overprinting these four inks is described by the CMYK (cyan/magenta/yellow/black) model. The HSB (hue/saturation/brightness) model is based on the way humans perceive color, so it gives users an intuitive way to translate the colors they see in nature into the colors the computer creates. The Lab color mode provides a device-independent way of creating color: the result does not depend on which monitor is used.

Photoshop's strength is image processing, not graphics creation, and it is necessary to distinguish between the two concepts.
Image processing means editing an existing bitmap image and applying special effects to it; the emphasis is on processing. Graphics-creation software, by contrast, is used to design vector graphics from one's own creative ideas; the main programs of this kind are Adobe Illustrator and Macromedia FreeHand.

Adobe Illustrator, the best known of these, excels at graphics creation rather than image processing. It is the industry-standard vector illustration software for publishing, multimedia, and online graphics. Whether you are a designer producing line art for print, a professional illustrator, an artist producing multimedia images, or a producer of web pages or online content, you will find that Illustrator is more than just an art tool: it gives line work unprecedented precision and control and is suitable for anything from small designs to large, complex projects.

With its powerful functions and well-designed user interface, Adobe Illustrator holds the largest share of the global vector-editing software market. According to incomplete statistics, 37% of designers worldwide use Adobe Illustrator for art and design work. In particular, Adobe's patented PostScript technology has allowed Illustrator to dominate professional printed illustration. Whether you are a line-art designer, a professional illustrator, a multimedia artist, or an online content producer, once you have used Illustrator you will find that only FreeHand can be compared with its powerful functions and clean interface design. (FreeHand was the vector graphics program launched by Macromedia; after Adobe acquired Macromedia, it was withdrawn from the market in favor of Illustrator.)

Adobe released Illustrator 1.1 in 1987 and a 2.0 version for the Windows platform the following year. Illustrator really took off, however, with Illustrator 88, released for the Mac in 1988. It was later upgraded to version 3.0 on the Mac in 1991 and extended to Unix platforms. The first PC version, 4.0, appeared in 1992 and was also the earliest version ported to Japanese. On the Mac the most widely used releases were 5.0/5.5, because they adopted Dan Clark's anti-aliasing display engine, which removed the jagged appearance of vector graphics on screen and marked a qualitative leap in display quality. The interface was also significantly reworked in a style very similar to Photoshop, so it was easy for existing Adobe users to pick up; not surprisingly, a Japanese edition soon became popular in the publishing industry, although no PC version was offered at that time. Adobe then released version 6.0 for the Mac and Unix platforms. PC users really came to know Illustrator with version 7.0, released in 1997 for both Mac and Windows. Because version 7.0 used the complete PostScript page-description language, the quality of page text and graphics took another leap, and its good interoperability with Photoshop earned it a strong reputation. The only pity was that version 7.0's support for Chinese was abysmal.
In 1998 Adobe released the landmark Illustrator 8.0, which made the program a very complete drawing package. Relying on its considerable strength, Adobe completely solved double-byte support for Chinese, Japanese, and similar languages, and added powerful features such as the gradient-mesh ("grid transition") tool (Corel Draw 9.0 has a corresponding function, but with poorer results) and improved text-editing tools, so that Illustrator fully occupied the leading position among professional vector graphics programs.

Adobe Illustrator's greatest characteristic is its use of Bezier curves, which make it possible to operate powerful vector drawing tools through simple actions. It now integrates functions such as text handling and coloring, and it is widely used not only for illustration but also for designing and producing printed matter such as advertising flyers and brochures; in fact it has become the de facto standard of the desktop publishing (DTP) industry. Its main competitor was Macromedia FreeHand, but Macromedia was acquired by Adobe in 2005.

The so-called Bezier curve method is realized in this software through the pen tool, which places anchor points and direction lines. Most users feel awkward with it at first and need some practice, but once it is mastered, all kinds of lines can be drawn at will, intuitively and reliably.

Illustrator is also an important component of the Creative Suite. It has an interface similar to its sibling bitmap program, Photoshop, can share some plug-ins and functions with it, and integrates with it almost seamlessly. It can also export files in Flash format, so Illustrator can serve as a bridge between Adobe products and Flash.

Adobe Illustrator CS5 was released on May 17, 2010. The new version can draw accurately in perspective, create variable-width strokes, paint with lifelike brushes, and take full advantage of integration with the new Adobe CS Live online services. Illustrator CS5 gives full control over variable-width strokes along a path, as well as arrowheads, dashes, and art brushes, and shapes can be merged, edited, and filled directly on the artboard without reaching for multiple tools and panels. It can handle files containing up to 100 artboards of different sizes and lets you organize and view them as you wish.

Taking Adobe Illustrator CS5 as an example, here is a brief introduction to some basic techniques.

Quick background layer. After finishing a design in Illustrator and opening the stored file in Photoshop, the artwork is often on a transparent layer with no background layer. To produce a background layer you would normally add a layer and then execute Merge Down or Flatten Image. A quicker method is to open the flyout menu at the upper right of the Layers palette, choose New Layer, and select Background in the dialog's mode option to produce the background layer directly. In Photoshop 5 and later this action has been merged into a single command: choose the corresponding new-background-layer command from the Layer menu to finish.

Remove unused swatches. When you open a file, Illustrator brings in swatches left over from files created in earlier versions of the program that you no longer need. To remove them, click the All Swatches icon in the Swatches palette, choose Select Unused from the popup menu, and then click the Trash icon to delete the irrelevant swatches. Sometimes you must repeat the select-and-delete process to make sure the palette is clean.
Note that complex documents can take a relatively long time to clean up.

Define swatches as spot colors. In Illustrator 5, spot colors have two distinct advantages over process colors that make them easy to work with: they let you apply tints, and when you edit a spot color's definition, every object filled with that color is automatically updated to the new color. Because process colors do not let you build tints or update automatically, you may want to define all of your swatches as spot colors. Be sure, however, that when the artwork goes to QuarkXPress or PageMaker for four-color printing, the spot colors are converted back to process colors.

Prefer CMYK. Because Illustrator 7 lets you work in the CMYK, RGB, and HSB (hue, saturation, brightness) color modes, you must set up your colors carefully: a document can contain objects created in a mixture of these modes, and when you output it, all kinds of unexpected things can happen. Files intended for printed output should use CMYK; use RGB only for artwork that will be displayed on screen. If a piece of artwork will be used both in print and on screen, first create the print file in CMYK, then use Save As to make a copy and convert the copy to the appropriate color mode.

Information source: Baidu Encyclopedia.

Appendix 2: Chinese Translation

The Difference Between Illustrator and Photoshop

Photoshop and Illustrator are both produced by Adobe. Photoshop, the more familiar of the two, is an image-processing program that integrates image scanning, editing and retouching, image creation, advertising design, and image input and output, and it is well liked by graphic designers and computer-art enthusiasts.
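The advice above recommends authoring print artwork in CMYK rather than RGB. As a rough illustration of how the two models relate, the following Python sketch performs a naive, uncalibrated RGB-to-CMYK conversion; the function name and formula are illustrative assumptions only, since real prepress work relies on ICC color management rather than a fixed formula.

```python
# Naive, illustrative RGB -> CMYK conversion (no ICC color management).
def rgb_to_cmyk(r, g, b):
    """r, g, b in 0..255; returns c, m, y, k in 0..1."""
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0          # pure black: only the K plate
    r_, g_, b_ = r / 255.0, g / 255.0, b / 255.0
    k = 1.0 - max(r_, g_, b_)              # simple black generation
    c = (1.0 - r_ - k) / (1.0 - k)
    m = (1.0 - g_ - k) / (1.0 - k)
    y = (1.0 - b_ - k) / (1.0 - k)
    return c, m, y, k

print(rgb_to_cmyk(255, 0, 0))              # pure red -> (0.0, 1.0, 1.0, 0.0)
```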
Image Processing: Foreign Literature Translation for a Graduation Thesis (Translation + Original)
English Source Material

Image processing is not a one-step process. We are able to distinguish between several steps which must be performed one after the other until we can extract the data of interest from the observed scene. In this way a hierarchical processing scheme is built up, as sketched in the figure, which gives an overview of the different phases of image processing.

Image processing begins with the capture of an image with a suitable, not necessarily optical, acquisition system. In a technical or scientific application, we may choose to select an appropriate imaging system. Furthermore, we can set up the illumination system, choose the best wavelength range, and select other options to capture the object feature of interest in the best way in an image. Once the image is sensed, it must be brought into a form that can be treated with digital computers. This process is called digitization.

As traffic problems become more and more serious, Intelligent Transport Systems (ITS) have emerged. Automatic license plate recognition is one of the most significant subjects to have grown out of the combination of computer vision and pattern recognition. The image input to the computer is processed and analyzed in order to locate the license plate and recognize the characters on it, expressing those characters as a text string. License plate recognition systems have important applications in ITS. In such a system, the first step is to locate the license plate in the captured image, which is very important for character recognition: the recognition rate is governed by how accurately the plate is located. In this paper, several image-manipulation methods are compared and analyzed, and approaches for locating the car plate are derived. Experiments show that good results are obtained with these methods. Methods based on the edge map and frequency analysis are used in locating the license plate; that is, after edge detection the characteristic features of the license plate are extracted from the car image, and then analyzed and processed until the probable plate region is extracted. Automated license plate location is a part of image processing and an important part of intelligent traffic systems; it is the key step in vehicle License Plate Recognition (LPR). A method for recognizing images with different backgrounds and different illuminations is proposed in the paper: the upper and lower borders are determined from the gray-level variation produced by the character distribution, and the left and right borders are determined from the black-white transitions of the pixels in every row.

The first steps of digital processing may include a number of different operations and are known as image preprocessing. If the sensor has nonlinear characteristics, these need to be corrected. Likewise, brightness and contrast of the image may require improvement. Commonly, too, coordinate transformations are needed to restore geometrical distortions introduced during image formation. Radiometric and geometric corrections are elementary pixel processing operations. It may be necessary to correct known disturbances in the image, for instance caused by a defocused optics, motion blur, errors in the sensor, or errors in the transmission of image signals. We also deal with reconstruction techniques which are required with many indirect imaging techniques such as tomography that deliver no direct image.

A whole chain of processing steps is necessary to analyze and identify objects. First, adequate filtering procedures must be applied in order to distinguish the objects of interest from other objects and the background. Essentially, from an image (or several images), one or more feature images are extracted. The basic tools for this task are averaging and edge detection and the analysis of simple neighborhoods and complex patterns known as texture in image processing. An important feature of an object is also its motion. Techniques to detect and determine motion are necessary. Then the object has to be separated from the background. This means that regions of constant features and discontinuities must be identified. This process leads to a label image. Now that we know the exact geometrical shape of the object, we can extract further information such as the mean gray value, the area, perimeter, and other parameters for the form of the object [3]. These parameters can be used to classify objects. This is an important step in many applications of image processing, as the following examples show. In a satellite image showing an agricultural area, we would like to distinguish fields with different fruits and obtain parameters to estimate their ripeness or to detect damage by parasites. There are many medical applications where the essential problem is to detect pathological changes. A classic example is the analysis of aberrations in chromosomes. Character recognition in printed and handwritten text is another example which has been studied since image processing began and still poses significant difficulties. You hopefully do more, namely try to understand the meaning of what you are reading. This is also the final step of image processing, where one aims to understand the observed scene. We perform this task more or less unconsciously whenever we use our visual system. We recognize people, we can easily distinguish between the image of a scientific lab and that of a living room, and we watch the traffic to cross a street safely. We all do this without knowing how the visual system works.

For some time now, image processing and computer graphics have been treated as two different areas. Knowledge in both areas has increased considerably and more complex problems can now be treated. Computer graphics is striving to achieve photorealistic computer-generated images of three-dimensional scenes, while image processing is trying to reconstruct one from an image actually taken with a camera. In this sense, image processing performs the inverse procedure to that of computer graphics. We start with knowledge of the shape and features of an object at the bottom of the figure and work upwards until we get a two-dimensional image. To handle image processing or computer graphics, we basically have to work from the same knowledge. We need to know the interaction between illumination and objects, how a three-dimensional scene is projected onto an image plane, etc. There are still quite a few differences between an image processing and a graphics workstation. But we can envisage that, when the similarities and interrelations between computer graphics and image processing are better understood and the proper hardware is developed, we will see some kind of general-purpose workstation in the future which can handle computer graphics as well as image processing tasks [5]. The advent of multimedia, i.e., the integration of text, images, sound, and movies, will further accelerate the unification of computer graphics and image processing.

In January 1980 Scientific American published a remarkable image called Plume 2, the second of eight volcanic eruptions detected on the Jovian moon by the spacecraft Voyager 1 on 5 March 1979. The picture was a landmark image in interplanetary exploration—the first time an erupting volcano had been seen in space. It was also a triumph for image processing. Satellite imagery and images from interplanetary explorers have until fairly recently been the major users of image processing techniques, where a computer image is numerically manipulated to produce some desired effect, such as making a particular aspect or feature in the image more visible. Image processing has its roots in photo reconnaissance in the Second World War, where processing operations were optical and interpretation operations were performed by humans who undertook such tasks as quantifying the effect of bombing raids. With the advent of satellite imagery in the late 1960s, much computer-based work began, and the color composite satellite images, sometimes startlingly beautiful, have become part of our visual culture and the perception of our planet. Like computer graphics, it was until recently confined to research laboratories which could afford the expensive image processing computers that could cope with the substantial processing overheads required to process large numbers of high-resolution images. With the advent of cheap powerful computers and image collection devices like digital cameras and scanners, we have seen a migration of image processing techniques into the public domain. Classical image processing techniques are routinely employed by graphic designers to manipulate photographic and generated imagery, either to correct defects, change color and so on, or creatively to transform the entire look of an image by subjecting it to some operation such as edge enhancement. A recent mainstream application of image processing is the compression of images, either for transmission across the Internet or the compression of moving video images in video telephony and video conferencing. Video telephony is one of the current crossover areas that employ both computer graphics and classical image processing techniques to try to achieve very high compression rates. All this is part of an inexorable trend towards the digital representation of images. Indeed, that most powerful image form of the twentieth century—the TV image—is also about to be taken into the digital domain.

Image processing is characterized by a large number of algorithms that are specific solutions to specific problems. Some are mathematical or context-independent operations that are applied to each and every pixel. For example, we can use Fourier transforms to perform image filtering operations. Others are "algorithmic": we may use a complicated recursive strategy to find those pixels that constitute the edges in an image. Image processing operations often form part of a computer vision system. The input image may be filtered to highlight or reveal edges prior to a shape detection step; such steps are usually known as low-level operations. In computer graphics, filtering operations are used extensively to avoid aliasing or sampling artifacts.

Chinese Translation

Image processing is not a process that can be completed in a single step.
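The passage above notes that some operations, such as filtering, are context-independent transforms that can be applied to every pixel, for example by way of the Fourier transform. As a hedged illustration of that idea, the NumPy sketch below applies an ideal low-pass filter in the frequency domain; the cutoff value, the random test image, and the function name are assumptions made for demonstration, not part of the source text.

```python
import numpy as np

def lowpass_fourier(image, cutoff=0.1):
    """Keep only spatial frequencies below `cutoff` (fraction of Nyquist)."""
    F = np.fft.fftshift(np.fft.fft2(image))            # centered spectrum
    rows, cols = image.shape
    y, x = np.ogrid[-rows // 2:rows // 2, -cols // 2:cols // 2]
    radius = np.sqrt((y / (rows / 2)) ** 2 + (x / (cols / 2)) ** 2)
    F[radius > cutoff] = 0                              # ideal low-pass mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

img = np.random.rand(256, 256)                          # stand-in for a real image
smoothed = lowpass_fourier(img, cutoff=0.15)
```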
English Literature Related to Image Processing
Journal of VLSI Signal Processing 39, 295–311, 2005. © 2005 Springer Science+Business Media, Inc. Manufactured in The Netherlands.

Parallel-Beam Backprojection: An FPGA Implementation Optimized for Medical Imaging

MIRIAM LEESER, SRDJAN CORIC, ERIC MILLER AND HAIQIAN YU
Department of Electrical and Computer Engineering, Northeastern University, Boston, MA 02115, USA
MARC TREPANIER
Mercury Computer Systems, Inc., Chelmsford, MA 01824, USA
Received September 2, 2003; Revised March 23, 2004; Accepted May 7, 2004

Abstract. Medical image processing in general and computerized tomography (CT) in particular can benefit greatly from hardware acceleration. This application domain is marked by computationally intensive algorithms requiring the rapid processing of large amounts of data. To date, reconfigurable hardware has not been applied to the important area of image reconstruction. For efficient implementation and maximum speedup, fixed-point implementations are required. The associated quantization errors must be carefully balanced against the requirements of the medical community. Specifically, care must be taken so that very little error is introduced compared to floating-point implementations and the visual quality of the images is not compromised. In this paper, we present an FPGA implementation of the parallel-beam backprojection algorithm used in CT for which all of these requirements are met. We explore a number of quantization issues arising in backprojection and concentrate on minimizing error while maximizing efficiency. Our implementation shows approximately 100 times speedup over software versions of the same algorithm running on a 1 GHz Pentium, and is more flexible than an ASIC implementation. Our FPGA implementation can easily be adapted to both medical sensors with different dynamic ranges as well as tomographic scanners employed in a wider range of application areas including nondestructive evaluation and baggage inspection in airport terminals.

Keywords: backprojection, medical imaging, tomography, FPGA, fixed point arithmetic

1. Introduction

Reconfigurable hardware offers significant potential for the efficient implementation of a wide range of computationally intensive signal and image processing algorithms. The advantages of utilizing Field Programmable Gate Arrays (FPGAs) instead of DSPs include reductions in the size, weight, performance and power required to implement the computational platform. FPGA implementations are also preferred over ASIC implementations because FPGAs have more flexibility and lower cost. To date, the full utility of this class of hardware has gone largely unexplored and unexploited for many mainstream applications. In this paper, we consider a detailed implementation and comprehensive analysis of one of the most fundamental tomographic image reconstruction steps, backprojection, on reconfigurable hardware. While we concentrate our analysis on issues arising in the use of backprojection for medical imaging applications, both the implementation and the analysis we provide can be applied directly or easily extended to a wide range of other fields where this task needs to be performed. This includes remote sensing and surveillance using synthetic aperture radar and non-destructive evaluation.

Tomography refers to the process that generates a cross-sectional or volumetric image of an object from a series of projections collected by scanning the object from many different directions [1]. Projection data acquisition can utilize X-rays, magnetic resonance, radioisotopes, or ultrasound. The discussion presented here pertains to the case of two-dimensional X-ray absorption tomography. In this type of tomography, projections are obtained by a number of sensors that measure the intensity of X-rays travelling through a slice of the scanned object. The radiation source and the sensor array rotate around the object in small increments. One projection is taken for each rotational angle. The image reconstruction process uses these projections to calculate the average X-ray attenuation coefficient in cross-sections of a scanned slice. If different structures inside the object induce different levels of X-ray attenuation, they are discernible in the reconstructed image. The most commonly used approach for image reconstruction from dense projection data (many projections, many samples per projection) is filtered backprojection (FBP). Depending on the type of X-ray source, FBP comes in parallel-beam and fan-beam variations [1]. In this paper, we focus on parallel-beam backprojection, but methods and results presented here can be extended to the fan-beam case with modifications.

FBP is a computationally intensive process. For an image of size n × n being reconstructed with n projections, the complexity of the backprojection algorithm is O(n³). Image reconstruction through backprojection is a highly parallelizable process. Such applications are good candidates for implementation in Field Programmable Gate Array (FPGA) devices since they provide fine-grained parallelism and the ability to be customized to the needs of a particular implementation. We have implemented backprojection by making use of these principles and shown approximately 100 times speedup over a software implementation on a 1 GHz Pentium. Our architecture can easily be expanded to newer and larger FPGA devices, further accelerating image generation by extracting more data parallelism.

A difficulty of implementing FBP is that producing high-resolution images with good resemblance to internal characteristics of the scanned object requires that both the density of each projection and their total number be large. This represents a considerable challenge for hardware implementations, which attempt to maximize the parallelism in the implementation. Therefore, it can be beneficial to use fixed-point implementations and to optimize the bit-width of a projection sample to the specific needs of the targeted application domain.
We show this for medical imaging, which exhibits distinctive properties in terms of required fixed-point precision. In addition, medical imaging requires high precision reconstructions since the visual quality of images must not be compromised. We have paid special attention to this requirement by carefully analyzing the effects of quantization on the quality of reconstructed images. We have found that a fixed-point implementation with properly chosen bit-widths can give high quality reconstructions and, at the same time, make hardware implementation fast and area efficient. Our quantization analysis investigates algorithm-specific and also general data quantization issues that pertain to input data. Algorithm-specific quantization deals with the precision of spatial address generation including the interpolation factor, and also investigates bit reduction of intermediate results for different rounding schemes.

In this paper, we focus on both FPGA implementation performance and medical image quality. In previous work in the area of hardware implementations of tomographic processing algorithms, Wu [2] gives a brief overview of all major subsystems in a computed tomography (CT) scanner and proposes locations where ASICs and FPGAs can be utilized. According to the author, semi-custom digital ASICs were the most appropriate due to the level of sophistication that FPGA technology had in 1991. Agi et al. [3] present the first description of a hardware solution for computerized tomography of which we are aware. It is a unified architecture that implements forward Radon transform, parallel- and fan-beam backprojection in an ASIC based multi-processor system. Our FPGA implementation focuses on backprojection. Agi et al. [4] present a similar investigation of quantization effects; however their results do not demonstrate the suitability of their implementation for medical applications. Although their filtered sinogram data are quantized with 12-bit precision, extensive bit truncation on functional unit outputs and low accuracy of the interpolation factor (absolute error of up to 2) render this implementation significantly less accurate than ours, which is based on 9-bit projections and a maximal interpolation factor absolute error of 2⁻⁴. An alternative to using specially designed processors for the implementation of filtered backprojection (FBP) is presented in [5]. In this work, a fast and direct FBP algorithm is implemented using texture-mapping hardware. It can perform parallel-beam backprojection of a 512-by-512-pixel image from 804 projections in 2.1 sec, while our implementation takes 0.25 sec for 1024 projections. Luiz et al. [6] investigated residue number systems (RNS) for the implementation of convolution based backprojection to speed up the processing. Unfortunately, extra binary-to-RNS and RNS-to-binary conversions are introduced. Other approaches to accelerating the backprojection algorithm have been investigated [7, 8]. One approach [7] presents an order O(n² log n) algorithm and merits further study. The suitability to medical image quality and hardware implementation of these approaches [7, 8] needs to be demonstrated. There is also a lot of interest in the area of fan-beam and cone-beam reconstruction using hardware implementation. An FPGA-based fan-beam reconstruction module [9] is proposed and simulated using MAX+PLUS II, version 9.1, but no actual FPGA implementation is mentioned. Moreover, the authors did not explore the potential parallelism for different projections as we do, which is essential for speed-up. More data and computation is needed for 3D cone-beam FBP. Yu's PC based system [10] can reconstruct the 512³ data from 288 × 512² projections in 15.03 min, which is not suitable for real-time use. The embedded system described in [11] can do 3D reconstruction in 38.7 sec, the fastest time reported in the literature. However, it is based on a Mercury RACE++ AdapDev 1120 development workstation and needs many modifications for a different platform. Bins et al. [12] have investigated precision vs. error in JPEG compression. The goals of this research are very similar to ours: to implement designs in fixed-point in order to maximize parallelism and area utilization. However, JPEG compression is an application that can tolerate a great deal more error than medical imaging.

[Figure 1. (a) Illustration of the coordinate system used in parallel-beam backprojection, and (b) geometric explanation of the incremental spatial address calculation.]

In the next section, we present the backprojection algorithm in more detail. In Section 3 we present our quantization studies and analysis of error introduced. Section 4 presents the hardware implementation in detail. Finally we present results and discuss future directions. An earlier version of this research was presented [13]. This paper provides a fuller discussion of the project and updated results.

2. Parallel-Beam Filtered Backprojection

A parallel-beam CT scanning system uses an array of equally spaced unidirectional sources of focused X-ray beams. Generated radiation not absorbed by the object's internal structure reaches a collinear array of detectors (Fig. 1(a)). Spatial variation of the absorbed energy in the two-dimensional plane through the object is expressed by the attenuation coefficient µ(x, y). The logarithm of the measured radiation intensity is proportional to the integral of the attenuation coefficient along the straight line traversed by the X-ray beam. A set of values given by all detectors in the array comprises a one-dimensional projection of the attenuation coefficient, P(t, θ), where t is the detector distance from the origin of the array, and θ is the angle at which the measurement is taken. A collection of projections for different angles over 180° can be visualized in the form of an image in which one axis is position t and the other is angle θ. This is called a sinogram or Radon transform of the two-dimensional function µ, and it contains the information needed for the reconstruction of an image µ(x, y). The Radon transform can be formulated as

log_e(I_0 / I_d) = ∫∫ µ(x, y) δ(x cos θ + y sin θ − t) dx dy ≡ P(t, θ),    (1)

where I_0 is the source intensity, I_d is the detected intensity, and δ(·) is the Dirac delta function. Equation (1) is actually a line integral along the path of the X-ray beam, which is perpendicular to the t axis (see Fig. 1(a)) at location t = x cos θ + y sin θ. The Radon transform represents an operator that maps an image µ(x, y) to a sinogram P(t, θ). Its inverse mapping, the inverse Radon transform, when applied to a sinogram results in an image. The filtered backprojection (FBP) algorithm performs this mapping [1].

FBP begins by high-pass filtering all projections before they are fed to hardware using the Ram-Lak or ramp filter, whose frequency response is |f|. The discrete formulation of backprojection is

µ(x, y) = (π / K) Σ_{i=1}^{K} Q_{θi}(x cos θi + y sin θi),    (2)

where Q_θ(t) is a filtered projection at angle θ, and K is the number of projections taken during CT scanning at angles θi over a 180° range. The number of values in Q_θ(t) depends on the image size. In the case of n × n pixel images, N = √2·n·D detectors are required. The ratio D = d/τ, where d is the distance between adjacent pixels and τ is the detector spacing, is a critical factor for the quality of the reconstructed image, and it obviously should satisfy D > 1. In our implementation, we utilize values of D ≈ 1.4 and N = 1024, which are typical for real systems. Higher values do not significantly increase the image quality.

Algorithmically, Eq. (2) is implemented as a triple nested "for" loop. The outermost loop is over projection angle, θ. For each θ, we update every pixel in the image in raster-scan order: starting in the upper left corner and looping first over columns, c, and next over rows, r. Thus, from (2), the pixel at location (r, c) is incremented by the value of Q_θ(t), where t is a function of r and c. The issue here is that the X-ray going through the currently reconstructed pixel, in general, intersects the detector array between detectors. This is solved by linear interpolation. The point of intersection is calculated as an address corresponding to detectors numbered from 0 to 1023. The fractional part of this address is the interpolation factor. The equation that performs linear interpolation is given by

Q_θ^int(i) = [Q_θ(i + 1) − Q_θ(i)] · IF + Q_θ(i),    (3)

where IF denotes the interpolation factor, Q_θ(t) is the 1024-element array containing filtered projection data at angle θ, and i is the integer part of the calculated address. The interpolation can be performed beforehand in software, or it can be a part of the backprojection hardware itself. We implement interpolation in hardware because it substantially reduces the amount of data that must be transmitted to the reconfigurable hardware board.

The key to an efficient implementation of Eq. (2) is shown in Fig. 1(b). It shows how a distance d between square areas that correspond to adjacent pixels can be converted to a distance Δt between locations where X-ray beams that go through the centers of these areas hit the detector array. This is also derived from the equation t = x cos θ + y sin θ. Assuming that pixels are processed in raster-scan fashion, then Δt = d cos θ for two adjacent pixels in the same row (x2 = x1 + d) and similarly Δt = d sin θ for two adjacent pixels in the same column (y2 = y1 − d). Our implementation is based on pre-computing and storing these deltas in look-up tables (LUTs). Three LUTs are used, corresponding to the nested "for" loop structure of the backprojection algorithm. LUT 1 stores the initial address along the detector axis (i.e. along t) for a given θ required to update the pixel at row 1, column 1. LUT 2 stores the increment in t required as we increment across a row. LUT 3 stores the increment for columns.

[Figure 2. Major simulation steps.]

3. Quantization

Mapping the algorithm directly to hardware will not produce an efficient implementation. Several modifications must be made to obtain a good hardware realization. The most significant modification is using fixed-point arithmetic. For hardware implementation, narrow bit widths are preferred for more parallelism, which translates to higher overall processing speed. However, medical imaging requires high precision, which may require wider bit widths. We did extensive analysis to optimize this tradeoff. We quantize all data and all calculations to increase the speed and decrease the resources required for implementation. Determining allowable quantization is based on a software simulation of the tomographic process. Figure 2 shows the major blocks of the simulation.
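Before moving on to the quantization study, the loop structure and incremental address computation described in Section 2 can be summarized in software. The NumPy sketch below is a floating-point reference for Eqs. (2) and (3); the coordinate conventions, array shapes, and variable names are illustrative assumptions, and the authors' actual design is a fixed-point FPGA datapath rather than this (deliberately slow) Python code.

```python
import numpy as np

def backproject(filtered_sino, n, D=1.4):
    """filtered_sino: (K, N) filtered projections; returns an n x n image."""
    K, N = filtered_sino.shape
    thetas = np.pi * np.arange(K) / K              # K angles over 180 degrees
    image = np.zeros((n, n))
    d = D                                          # pixel spacing in detector units (d/tau)
    center = N / 2.0
    for k, theta in enumerate(thetas):
        q = filtered_sino[k]
        # initial detector address for the first pixel (cf. LUT 1)
        t0 = center - (n / 2.0) * d * (np.cos(theta) + np.sin(theta))
        dcol = d * np.cos(theta)                   # increment across a row (cf. LUT 2)
        drow = d * np.sin(theta)                   # increment down a column (cf. LUT 3)
        for r in range(n):
            t = t0 + r * drow
            for c in range(n):
                i = int(t)                         # integer part: detector index
                IF = t - i                         # fractional part: interpolation factor
                if 0 <= i < N - 1:
                    image[r, c] += (q[i + 1] - q[i]) * IF + q[i]   # Eq. (3)
                t += dcol
    return image * np.pi / K                       # scaling from Eq. (2)
```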
An input image is first fed to the software implementation of the Radon transform, also known as reprojection [14], which generates the sinogram of 1024 projections and 1024 samples per projection. The filtering block convolves sinogram data with the impulse response of the ramp filter, generating a filtered sinogram, which is then backprojected to give a reconstructed image.

All values in the backprojection algorithm are real numbers. These can be implemented as either floating-point or fixed-point values. Floating-point representation gives increased dynamic range, but is significantly more expensive to implement in reconfigurable hardware, both in terms of area and speed. For these reasons we have chosen to use fixed-point arithmetic. An important issue, especially in medical imaging, is how much numerical accuracy is sacrificed when fixed-point values are used. Here, we present the methods used to find appropriate bit-widths for maintaining sufficient numerical accuracy. In addition, we investigate possibilities for bit reduction on the outputs of certain functional units in the datapath for different rounding schemes, and what influence that has on the error introduced in reconstructed images. Our analysis shows that medical images display distinctive properties with respect to how different quantization choices affect their reconstruction. We exploit this and customize quantization to best fit medical images. We compute the quantization error by comparing a fixed-point image reconstruction with a floating-point one.

Fixed-point variables in our design use a general slope/bias encoding, meaning that they are represented as

V ≈ V_a = S·Q + B,    (4)

where V is an arbitrary real number, V_a is its fixed-point approximation, Q is an integer that encodes V, S is the slope, and B is the bias. Fixed-point versions of the sinogram and the filtered sinogram use slope/bias scaling where the slope and bias are calculated to give maximal precision. The quantization of these two variables is calculated as:

S = (max(V) − min(V)) / (max(Q) − min(Q)) = (max(V) − min(V)) / (2^ws − 1),    (5)

B = max(V) − S·max(Q)  or  B = min(V) − S·min(Q),    (6)

Q = round((V − B) / S),    (7)

where ws is the word size in bits of the integer Q. Here, max(V) and min(V) are the maximum and minimum values that V will take, respectively. max(V) was determined based on analysis of data. Since sinogram data are unsigned numbers, in this case min(V) = min(Q) = B = 0. The interpolation factor is an unsigned fractional number and uses radix point-only scaling. Thus, the quantized interpolation factor is calculated as in Eq. (7), with saturation on overflow, with S = 2^(−E) where E is the number of fractional bits, and with B = 0.

For a given sinogram, S and B are constants and they do not show up in the hardware—only the quantized value Q is part of the hardware implementation.
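A minimal Python illustration of the slope/bias encoding of Eqs. (4)-(7), assuming unsigned integers of word size ws; the helper name and the random stand-in data are hypothetical, not part of the paper.

```python
import numpy as np

def quantize_slope_bias(values, ws):
    """Quantize an array to ws-bit unsigned integers with maximal-precision slope/bias."""
    vmax, vmin = float(values.max()), float(values.min())
    qmax = 2 ** ws - 1
    S = (vmax - vmin) / qmax                                  # Eq. (5)
    B = vmin                                                  # Eq. (6) with min(Q) = 0
    Q = np.clip(np.round((values - B) / S), 0, qmax).astype(np.int64)   # Eq. (7)
    return Q, S, B

sino = np.random.rand(1024, 1024) * 4096.0                    # stand-in for 12-bit sinogram data
Q, S, B = quantize_slope_bias(sino, ws=12)
approx = S * Q + B                                            # Eq. (4): V is approximately S*Q + B
print(np.abs(approx - sino).max())                            # worst-case quantization error
```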
Note that in Eq. (3), two data samples are subtracted from each other before multiplication with the interpolation factor takes place. Thus, in general, the bias B is eliminated from the multiplication, which makes quantization of filtered sinogram data with maximal precision scaling easily implementable in hardware.

The next important issue is the metric used for evaluating the error introduced by quantization. Our goal was to find a metric that would accurately describe visual differences between compared images regardless of their dynamic range. If 8-bit and 16-bit versions of a single image are reconstructed so that there is no visible difference between the original and reconstructed images, the proper metric should give a comparable estimate of the error for both bit-widths. The proper metric should also be insensitive to the shift of pixel value range that can emerge for different quantization and rounding schemes. Absolute values of single pixels do not affect visual image quality as long as their relative value is preserved, because pixel values are mapped to a set of grayscale values. The error metric we use that meets these criteria is the Relative Error (RE):

RE = sqrt( Σ_{i=1}^{M} [ (x_i − x̄) − (y_i^FP − ȳ^FP) ]² / Σ_{i=1}^{M} ( y_i^FP − ȳ^FP )² ),    (8)

Here, M is the total number of pixels, x_i and y_i^FP are the values of the i-th pixel in the quantized and floating-point reconstructions respectively, and x̄, ȳ^FP are their means. The mean value is subtracted because we only care about the relative pixel values.

[Figure 3. Some of the images used as inputs to the simulation process.]

Figure 3 shows some characteristic images from a larger set of 512-by-512-pixel images used as inputs to the simulation process. All images are monochrome 8-bit images, but 16-bit versions are also used in simulations. Each image was chosen for a certain reason. For example, the Shepp-Logan phantom is well known and widely used in testing the ability of algorithms to accurately reconstruct cross sections of the human head. It is believed that cross-sectional images of the human head are the most sensitive to numerical inaccuracies and the presence of artifacts induced by a reconstruction algorithm [1]. Other medical images were Female, Head, and Heart, obtained from the visible human web site [15]. The Random image (a white noise image) should result in the upper bound on bit-widths required for a precise reconstruction. The Artificial image is unique because it contains all values in the 8-bit grayscale range. This image also contains straight edges of rectangles, which induce more artifacts in the reconstructed image. This is also characteristic of the Head image, which contains a rectangular border around the head slice.

Figure 4 shows the detailed flowchart of the simulated CT process. In addition to the major blocks designated as Reproject, Filter and Backproject, Fig. 4 also includes the different quantization steps that we have investigated. Each path in this flowchart represents a separate simulation cycle. Cycle 1 gives a floating-point (FP) reconstruction of an input image. All other cycles perform one or more types of quantization and their resulting images are compared to the corresponding FP reconstruction by computing the Relative Error.

[Figure 4. Detailed flowchart of the simulation process.]

The first quantization step converts FP projection data obtained by the reprojection step to a fixed-point representation. Simulation cycle 2 is used to determine how different bit-widths for quantized sinogram data affect the quality of a reconstructed image. Our research was based on a prototype system that used 12-bit accurate detectors for the acquisition of sinogram data. Simulations showed that this bit-width is a good choice since the worst case introduced error amounts to 0.001%. The second quantization step performs the conversion of filtered sinogram data from FP to fixed-point representation. Simulation cycle 3 is used to find the appropriate bit-width of the words representing a filtered sinogram. Figure 5 shows the results for this cycle. Since we use linear interpolation of projection values corresponding to adjacent detectors, the interpolation factor in Eq. (3) also has to be quantized. Figure 6 summarizes results obtained from simulation cycle 4, which is used to evaluate the error induced by this quantization.

[Figure 5. Simulation results for the quantization of filtered sinogram data.]
[Figure 6. Simulation results for the quantization of the interpolation factor.]

Figures 5 and 6 show the Relative Error metric for different word length values and for different simulation cycles for a number of input images. Some input images were used in both 8-bit and 16-bit versions. Figure 5 corresponds to the quantization of filtered sinogram data (path 3 in Fig. 4). The conclusion here is that 9-bit quantization is the best choice since it gives considerably smaller error than 8-bit quantization, which for some images induces visible artifacts. At the same time, 10-bit quantization does not give visible improvement. The exceptions are images 2 and 3, which require 13 bits. From Fig. 6 (path 4 in Fig. 4), we conclude that 3 bits for the interpolation factor (meaning the maximum error for the spatial address is 2⁻⁴) is sufficiently accurate. As expected, image 1 is more sensitive to the precision of the linear interpolation because of its randomness. Figure 7 shows that combining these quantization schemes results in a very small error for the image "Head" in Fig. 3.

[Figure 7. Relative error between fixed-point and floating-point reconstruction.]

We also investigated whether it is feasible to discard some of the least significant bits (LSBs) on outputs of functional units (FUs) in the datapath and still not introduce any visible artifacts. The goal is for the reconstructed pixel values to have the smallest possible bit-widths. This is based on the intuition that bit reduction done further down the datapath will introduce a smaller amount of error in the result. If the same bit-width were obtained by simply quantizing filtered projection data with fewer bits, the error would be magnified by the operations performed in the datapath, especially by the multiplication. Path number 5 in Fig. 4 depicts the simulation cycles that investigate bit reduction at the outputs of three of the FUs. These FUs implement the subtraction, multiplication and addition that are all part of the linear interpolation from Eq. (3). When some LSBs are discarded, the remaining part of a binary word can be rounded in different ways. We investigate two different rounding schemes, specifically rounding to nearest and truncation (or rounding to floor). Rounding to nearest is expected to introduce the smallest error, but requires additional logic resources. Truncation has no resource requirements, but introduces a negative shift of values representing reconstructed pixels. Bit reduction effectively optimizes the bit-widths of FUs that are downstream in the data flow.

[Figure 8. Bit reduction on the output of the interpolation multiplier.]

Figure 8 shows tradeoffs of bit reduction and the two rounding schemes after multiplication for medical images. It should be noted that sinogram data are quantized to 12 bits, the filtered sinogram to 9 bits, and the interpolation factor is quantized to 3 bits (2⁻⁴ precision). Similar studies were done for the subtraction and addition operations and on a broader set of images. It was determined that medical images suffer the least amount of error introduced by combining quantizations and bit reduction. For medical images, in the case of rounding to nearest, there is very little difference in the introduced error between 1 and 3 discarded bits after multiplication and addition. This difference is higher in the case of bit reduction after addition because the multiplication that follows magnifies the error. For all three FUs, when only medical images are considered, there is a fixed relationship between rounding to nearest and truncation. Two least-significant bits discarded with rounding to nearest introduce an error that is lower than or close to the error of 1 bit discarded with truncation. Although rounding to nearest requires logic resources, even when only one LSB is discarded with rounding to nearest after each of the three FUs, the overall resource consumption is reduced because of savings provided by smaller FUs and pipeline registers (see Figs. 11 and 12). Figure 9 shows that discarding LSBs introduces additional error on medical images for this combination of quantizations. In our case there was no need for using bit reduction to achieve smaller resource consumption because the targeted FPGA chip (Xilinx Virtex 1000) provided sufficient logic resources.

There is one more quantization issue we considered. It pertains to the data needed for the generation of the address into a projection array (spatial address addr) and to the interpolation factor. As described in the introduction, there are three different sets of data stored in look-up tables (LUTs) that can be quantized. Since pixels are being processed in raster-scan order, the spatial address addr is generated by accumulating entries from LUTs 2 and 3 onto the corresponding entry in LUT 1. The 10-bit integer part of the address addr is the index into the projection array Q_θ(·), while its fractional part is the interpolation factor. By using radix point-only …
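The Relative Error metric of Eq. (8) is straightforward to reproduce in software. The sketch below is an illustrative NumPy version, with hypothetical array names and a synthetic example; it is not code from the paper.

```python
import numpy as np

def relative_error(fixed_img, float_img):
    """Mean-removed relative error between a quantized and a floating-point reconstruction (Eq. 8)."""
    x = fixed_img.astype(np.float64) - fixed_img.mean()
    y = float_img.astype(np.float64) - float_img.mean()
    return np.sqrt(np.sum((x - y) ** 2) / np.sum(y ** 2))

# Example: compare a crude 9-bit requantization of an image against the original.
rng = np.random.default_rng(0)
ref = rng.random((512, 512))
quant = np.round(ref * 511) / 511
print(relative_error(quant, ref))
```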
A Survey of Image Science (Foreign Literature and Translation)
Appendix: A Survey of Image Science

In recent years, image processing and recognition technology has developed rapidly, and it is now fully recognized as an important means of understanding and transforming the world. It has already been applied in many fields and has become an important high technology of the information age of the 21st century.

1. Overview of Image Processing and Recognition Technology

An image is an entity obtained by observing the objective world with various observation systems in different forms and by different means; it can act on the human eye directly or indirectly and produce visual perception. Scientific research and statistics show that about 75% of the information humans obtain from the outside world comes through the visual system; in other words, most human information is obtained from images. Image processing is an important means of extending human vision, enabling people to see images measured at any wavelength. For example, with gamma cameras and X-ray machines people can see infrared and ultrasound images; with CT one can see tomographic images of the interior of an object; and with the appropriate tools one can see stereoscopic and cutaway images. In 1964, the United States brought back a large number of lunar photographs from space exploration, but owing to various environmental factors they were very unclear. The Jet Propulsion Laboratory (JPL) therefore processed the images by computer, so that the important information in the photographs could be reproduced clearly. This was an important milestone in the development of the technology. Since then, image processing technology has been widely applied in space research.
Overall, the development of image processing technology has passed through roughly four stages: an initial stage, a development stage, a popularization stage, and a practical-application stage. The initial stage began in the 1960s, when images were displayed by pixel-based raster scanning and were mostly processed on medium and large mainframe computers. In this period, because image storage costs and processing equipment were expensive, the range of applications was very narrow. The 1970s marked the development stage: medium and small computers came into wide use for processing, image display gradually shifted to raster scanning, and in particular the appearance of CT and satellite remote-sensing images greatly promoted the development of image processing technology. By the 1980s, image processing technology entered the popularization stage, when microcomputers were already capable of handling graphics and image processing tasks. The appearance of VLSI further increased processing speed greatly and reduced costs, which strongly promoted the popularization and application of graphics and image systems. The 1990s were the practical-application period of image technology; the amount of information involved in image processing is enormous, and the demands on processing speed are extremely high.

In the 21st century, image technology is developing towards higher quality, which is mainly reflected in the following points: (1) high resolution and high speed — the ultimate goal of image processing technology is real-time processing, which is of great significance for generating, recognizing, and tracking moving targets; (2) stereoscopy — stereoscopic imagery contains the most complete and rich information, and digital holography will help to achieve this goal; (3) intelligence — the aim is the intelligent generation, processing, recognition, and understanding of images.
Image Processing: Foreign Literature Translation
Appendix A

3 Image Enhancement in the Spatial Domain

The principal objective of enhancement is to process an image so that the result is more suitable than the original image for a specific application. The word specific is important, because it establishes at the outset that the techniques discussed in this chapter are very much problem oriented. Thus, for example, a method that is quite useful for enhancing X-ray images may not necessarily be the best approach for enhancing pictures of Mars transmitted by a space probe. Regardless of the method used, however, image enhancement is one of the most interesting and visually appealing areas of image processing.

Image enhancement approaches fall into two broad categories: spatial domain methods and frequency domain methods. The term spatial domain refers to the image plane itself, and approaches in this category are based on direct manipulation of pixels in an image. Frequency domain processing techniques are based on modifying the Fourier transform of an image. Spatial methods are covered in this chapter, and frequency domain enhancement is discussed in Chapter 4. Enhancement techniques based on various combinations of methods from these two categories are not unusual. We note also that many of the fundamental techniques introduced in this chapter in the context of enhancement are used in subsequent chapters for a variety of other image processing applications.

There is no general theory of image enhancement. When an image is processed for visual interpretation, the viewer is the ultimate judge of how well a particular method works. Visual evaluation of image quality is a highly subjective process, thus making the definition of a "good image" an elusive standard by which to compare algorithm performance. When the problem is one of processing images for machine perception, the evaluation task is somewhat easier. For example, in dealing with a character recognition application, and leaving aside other issues such as computational requirements, the best image processing method would be the one yielding the best machine recognition results. However, even in situations when a clear-cut criterion of performance can be imposed on the problem, a certain amount of trial and error usually is required before a particular image enhancement approach is selected.

3.1 Background

As indicated previously, the term spatial domain refers to the aggregate of pixels composing an image. Spatial domain methods are procedures that operate directly on these pixels. Spatial domain processes will be denoted by the expression

g(x, y) = T[f(x, y)]    (3.1-1)

where f(x, y) is the input image, g(x, y) is the processed image, and T is an operator on f, defined over some neighborhood of (x, y). In addition, T can operate on a set of input images, such as performing the pixel-by-pixel sum of K images for noise reduction, as discussed in Section 3.4.2.

The principal approach in defining a neighborhood about a point (x, y) is to use a square or rectangular subimage area centered at (x, y). The center of the subimage is moved from pixel to pixel, starting, say, at the top left corner. The operator T is applied at each location (x, y) to yield the output, g, at that location. The process utilizes only the pixels in the area of the image spanned by the neighborhood. Although other neighborhood shapes, such as approximations to a circle, sometimes are used, square and rectangular arrays are by far the most predominant because of their ease of implementation.

The simplest form of T is when the neighborhood is of size 1×1 (that is, a single pixel). In this case, g depends only on the value of f at (x, y), and T becomes a gray-level (also called an intensity or mapping) transformation function of the form

s = T(r)    (3.1-2)

where, for simplicity in notation, r and s are variables denoting, respectively, the gray level of f(x, y) and g(x, y) at any point (x, y). Some fairly simple, yet powerful, processing approaches can be formulated with gray-level transformations. Because enhancement at any point in an image depends only on the gray level at that point, techniques in this category often are referred to as point processing.

Larger neighborhoods allow considerably more flexibility. The general approach is to use a function of the values of f in a predefined neighborhood of (x, y) to determine the value of g at (x, y). One of the principal approaches in this formulation is based on the use of so-called masks (also referred to as filters, kernels, templates, or windows). Basically, a mask is a small (say, 3×3) 2-D array, in which the values of the mask coefficients determine the nature of the process. Enhancement techniques based on this type of approach often are referred to as mask processing or filtering. These concepts are discussed in Section 3.5.

3.2 Some Basic Gray Level Transformations

We begin the study of image enhancement techniques by discussing gray-level transformation functions. These are among the simplest of all image enhancement techniques. The values of pixels, before and after processing, will be denoted by r and s, respectively. As indicated in the previous section, these values are related by an expression of the form s = T(r), where T is a transformation that maps a pixel value r into a pixel value s. Since we are dealing with digital quantities, values of the transformation function typically are stored in a one-dimensional array and the mappings from r to s are implemented via table lookups. For an 8-bit environment, a lookup table containing the values of T will have 256 entries.

As an introduction to gray-level transformations, three basic types of functions are used frequently for image enhancement: linear (negative and identity transformations), logarithmic (log and inverse-log transformations), and power-law (nth power and nth root transformations). The identity function is the trivial case in which output intensities are identical to input intensities. It is included in the graph only for completeness.

3.2.1 Image Negatives

The negative of an image with gray levels in the range [0, L−1] is obtained by using the negative transformation, which is given by the expression

s = L − 1 − r    (3.2-1)

Reversing the intensity levels of an image in this manner produces the equivalent of a photographic negative. This type of processing is particularly suited for enhancing white or gray detail embedded in dark regions of an image, especially when the black areas are dominant in size.

3.2.2 Log Transformations

The general form of the log transformation is

s = c log(1 + r)    (3.2-2)

where c is a constant, and it is assumed that r ≥ 0. The shape of the log curve shows that this transformation maps a narrow range of low gray-level values in the input image into a wider range of output levels. The opposite is true of higher values of input levels. We would use a transformation of this type to expand the values of dark pixels in an image while compressing the higher-level values. The opposite is true of the inverse log transformation.

Any curve having the general shape of the log functions would accomplish this spreading/compressing of gray levels in an image.
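As a concrete illustration of these point operations, the sketch below applies the negative transformation of Eq. (3.2-1) and the log transformation of Eq. (3.2-2) to an 8-bit image. NumPy, the choice of the scaling constant c, and the random stand-in image are assumptions for demonstration only.

```python
import numpy as np

L = 256                                    # number of gray levels in an 8-bit image

def negative(img):
    """Eq. (3.2-1): s = L - 1 - r."""
    return (L - 1) - img

def log_transform(img):
    """Eq. (3.2-2): s = c * log(1 + r), scaled back to [0, L-1]."""
    c = (L - 1) / np.log(L)                # maps r = L-1 to s = L-1
    s = c * np.log1p(img.astype(np.float64))
    return np.clip(s, 0, L - 1).astype(np.uint8)

img = np.random.randint(0, L, size=(64, 64), dtype=np.uint8)   # stand-in image
neg, logged = negative(img), log_transform(img)
```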
In fact, the power-law transformations discussed in the next section are much more versatile for this purpose than the log transformation. However, the log function has the important characteristic that it compresses the dynamic range of images with large variations in pixel values, a classic example being the Fourier spectrum. It is not unusual to encounter spectrum values that range from 0 to 10^6 or higher. While processing numbers such as these presents no problems for a computer, image display systems generally will not be able to reproduce faithfully such a wide range of intensity values. The net effect is that a significant degree of detail will be lost in the display of a typical Fourier spectrum.

3.2.3 Power-Law Transformations

Power-law transformations have the basic form

s = c r^γ    (3.2-3)

where c and γ are positive constants. Sometimes Eq. (3.2-3) is written with an added offset term to account for a measurable output when the input is zero. However, offsets typically are an issue of display calibration and as a result they are normally ignored in Eq. (3.2-3). Plots of s versus r for various values of γ are shown in Fig. 3.6. As in the case of the log transformation, power-law curves with fractional values of γ map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of input levels. Unlike the log function, however, we notice here a family of possible transformation curves obtained simply by varying γ. As expected, we see in Fig. 3.6 that curves generated with values of γ > 1 have exactly the opposite effect as those generated with values of γ < 1. Finally, we note that Eq. (3.2-3) reduces to the identity transformation when c = γ = 1.

A variety of devices used for image capture, printing, and display respond according to a power law; by convention, the exponent is referred to as gamma [hence our use of this symbol in Eq. (3.2-3)]. The process used to correct this power-law response phenomenon is called gamma correction. Gamma correction is important if displaying an image accurately on a computer screen is of concern. Images that are not corrected properly can look either bleached out, or, what is more likely, too dark. Trying to reproduce colors accurately also requires some knowledge of gamma correction because varying the value of gamma changes not only the brightness, but also the ratios of red to green to blue. Gamma correction has become increasingly important in the past few years, as use of digital images for commercial purposes over the Internet has increased. It is not unusual that images created for a popular Web site will be viewed by millions of people, the majority of whom will have different monitors and/or monitor settings. Some computer systems even have partial gamma correction built in. Also, current image standards do not contain the value of gamma with which an image was created, thus complicating the issue further. Given these constraints, a reasonable approach when storing images in a Web site is to preprocess the images with a gamma that represents an "average" of the types of monitors and computer systems that one expects in the open market at any given point in time.
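Before turning to piecewise-linear functions, here is a minimal sketch of the power-law transformation of Eq. (3.2-3). Normalizing r to [0, 1] before applying the exponent, and rescaling back to 8 bits afterwards, are implementation choices assumed here rather than stated in the text.

```python
import numpy as np

def power_law(img, gamma, c=1.0, L=256):
    """Eq. (3.2-3): s = c * r**gamma, with r and s normalized to [0, 1]."""
    r = img.astype(np.float64) / (L - 1)
    s = c * np.power(r, gamma)
    return np.clip(s * (L - 1), 0, L - 1).astype(np.uint8)

r = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
brighter = power_law(r, gamma=0.4)   # gamma < 1: expands dark input values
darker   = power_law(r, gamma=2.5)   # gamma > 1: the opposite effect
```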
3.2.4 Piecewise-Linear Transformation Functions

A complementary approach to the methods discussed in the previous three sections is to use piecewise linear functions. The principal advantage of piecewise linear functions over the types of functions we have discussed thus far is that the form of piecewise functions can be arbitrarily complex. In fact, as we will see shortly, a practical implementation of some important transformations can be formulated only as piecewise functions. The principal disadvantage of piecewise functions is that their specification requires considerably more user input.

Contrast stretching. One of the simplest piecewise linear functions is a contrast-stretching transformation. Low-contrast images can result from poor illumination, lack of dynamic range in the imaging sensor, or even the wrong setting of a lens aperture during image acquisition. The idea behind contrast stretching is to increase the dynamic range of the gray levels in the image being processed.

Gray-level slicing. Highlighting a specific range of gray levels in an image often is desired. Applications include enhancing features such as masses of water in satellite imagery and enhancing flaws in X-ray images. There are several ways of doing level slicing, but most of them are variations of two basic themes. One approach is to display a high value for all gray levels in the range of interest and a low value for all other gray levels.

Bit-plane slicing. Instead of highlighting gray-level ranges, highlighting the contribution made to total image appearance by specific bits might be desired. Suppose that each pixel in an image is represented by 8 bits. Imagine that the image is composed of eight 1-bit planes, ranging from bit-plane 0 for the least significant bit to bit-plane 7 for the most significant bit. In terms of 8-bit bytes, plane 0 contains all the lowest order bits in the bytes comprising the pixels in the image and plane 7 contains all the high-order bits.

3.3 Histogram Processing

The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function h(r_k) = n_k, where r_k is the kth gray level and n_k is the number of pixels in the image having gray level r_k. It is common practice to normalize a histogram by dividing each of its values by the total number of pixels in the image, denoted by n. Thus, a normalized histogram is given by p(r_k) = n_k / n, for k = 0, 1, ..., L-1. Loosely speaking, p(r_k) gives an estimate of the probability of occurrence of gray level r_k. Note that the sum of all components of a normalized histogram is equal to 1.

Histograms are the basis for numerous spatial domain processing techniques. Histogram manipulation can be used effectively for image enhancement, as shown in this section. In addition to providing useful image statistics, we shall see in subsequent chapters that the information inherent in histograms also is quite useful in other image processing applications, such as image compression and segmentation. Histograms are simple to calculate in software and also lend themselves to economic hardware implementations, thus making them a popular tool for real-time image processing.

Appendix B, Chapter 3: Image Enhancement in the Spatial Domain. The principal objective of enhancement is to process an image so that the result is more suitable than the original image for a specific application.
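As a small illustration of the definitions in Section 3.3 above, and of bit-plane slicing, the sketch below computes a normalized histogram p(r_k) = n_k / n and extracts one bit plane. It is an assumed NumPy rendering, not code from the source.

```python
import numpy as np

def normalized_histogram(img, L=256):
    """p(r_k) = n_k / n for k = 0..L-1; the entries sum to 1."""
    counts = np.bincount(img.ravel(), minlength=L).astype(np.float64)
    return counts / img.size

def bit_plane(img, k):
    """Extract bit-plane k (0 = least significant) from an 8-bit image."""
    return (img >> k) & 1

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
p = normalized_histogram(img)
assert abs(p.sum() - 1.0) < 1e-9
plane7 = bit_plane(img, 7)        # the most significant bit-plane
```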
Introduction to the table of contents of 《计算机英语实用教程(第四版)》 (A Practical Course in Computer English, Fourth Edition)
A Practical Course in Computer English (Fourth Edition) is a textbook published in 2012; its authors are Liu Zhaoyu and Zheng Jianong.
Basic information: Publisher: Tsinghua University Press, 4th edition (September 1, 2010). Series: National Planned Textbook of the Eleventh Five-Year Plan for Regular Higher Education, computer science textbook series. Paperback: 255 pages. Languages: English and Simplified Chinese. Format: 16mo. ISBN: 7302227977, 9787302227977.
Editor's recommendation: the book has gone through four editions, with cumulative sales of more than 500,000 copies, and has been widely adopted.
The fourth edition incorporates material reflecting the latest technologies and applications and consists of three parts: computer hardware, computer software, and computer applications.
Sentences and words that are difficult to translate or understand are annotated; each section ends with a list of key vocabulary and exercises, and each chapter ends with a reading passage on recent technology together with the related vocabulary, so as to improve the reader's ability to read computer literature in English.
Answers to the exercises and reference translations for each section are provided at the back of the book.
While keeping the basic structure of the earlier Computer English, the fourth edition reorganizes the course content so that readers can still cover computer fundamentals, systems, and applications in fewer class hours.
The content is comparatively streamlined and the level of difficulty has been lowered appropriately.
University instructors and readers can choose whichever of the two editions best suits their situation.
Table of contents (reference translation):
Chapter 1 Computer Hardware: 1.1 Computer organization; 1.2 What is a processor; 1.3 The memory system; 1.4 The input/output (I/O) system; 1.5 Buses and controllers
Chapter 2 Computer Operating Systems: 2.1 Overview of operating systems; 2.2 Windows XP and Vista; 2.3 UNIX and Linux
Chapter 3 Using a Personal Computer (PC): 3.1 Selecting and setting up a PC system; 3.2 Using a PC; 3.3 PC system maintenance
Chapter 4 Computer Networks: 4.1 Computer network architecture; 4.2 Local area networks (LAN); 4.3 Wide area networks (WAN)
Chapter 5 The Internet: 5.1 Overview of the Internet; 5.2 Connecting to the Internet; 5.3 Web browsers and servers; 5.4 Network security
Chapter 6 Internet Applications: 6.1 E-commerce; 6.2 What is enterprise resource planning (ERP); 6.3 About Internet telephony
Chapter 7 Programming Languages: 7.1 Overview of computer languages; 7.2 BASIC and Visual Basic; 7.3 C, C++, and C#; 7.4 Markup and scripting languages
Chapter 8 Software Engineering: 8.1 Software development life-cycle models; 8.2 Requirements analysis; 8.3 Software design and testing; 8.4 Software maintenance
Chapter 9 Databases and Their Applications: 9.1 Database management systems (DBMS) and management information systems (MIS); 9.2 How a database works; 9.3 The Web and databases
Chapter 10 Office Automation Software: 10.1 Basics of office automation software; 10.2 Office Word 2007; 10.3 Office Excel 2007; 10.4 Office PowerPoint 2007
Chapter 11 Computer Graphics and Image Processing: 11.1 Introduction; 11.2 Graphics software; 11.3 The hierarchy of image processing operations; 11.4 Digital image file formats
Chapter 12 Multimedia: 12.1 What is multimedia; 12.2 Uses of multimedia; 12.3 Multimedia technology
Chapter 13 Modern Industrial Automation Software: 13.1 Overview; 13.2 Applications of CAD, CAM, and CAE; 13.3 Manufacturing resource planning, MRP-II, and others
Digital Image Processing: Translated Foreign Literature and References
Digital image processing: translated foreign literature and references (the document contains the English original and the Chinese translation in parallel).

Original text: Application of Digital Image Processing in the Measurement of Casting Surface Roughness

Abstract: This paper presents a surface image acquisition system based on digital image processing technology. The image acquired by a CCD is pre-processed through image editing, image equalization, image binarization, and feature parameter extraction to achieve casting surface roughness measurement. A three-dimensional evaluation method is used to obtain the evaluation parameters and the casting surface roughness from the extracted feature parameters. An automatic detection interface for casting surface roughness based on MATLAB is compiled, which provides a solid foundation for the online and fast detection of casting surface roughness based on image processing technology.

Keywords: casting surface; roughness measurement; image processing; feature parameters

I. INTRODUCTION

Nowadays the demand for machining quality and surface roughness is much higher, and machine vision inspection based on image processing has become one of the hotspots of measuring technology in the mechanical industry because of advantages such as non-contact operation, high speed, suitable precision, and strong resistance to interference [1, 2]. Since the casting surface is irregular and its roughness covers a wide range, detection parameters related only to the height direction cannot meet the current requirements of photoelectric technology; the horizontal spacing of the roughness also requires a quantitative representation. Therefore, with a three-dimensional evaluation system for casting surface roughness as the goal [3, 4], a surface roughness measurement based on image processing technology is presented. Image preprocessing is carried out through image enhancement and image binarization, and a three-dimensional roughness evaluation based on the feature parameters is performed. An automatic detection interface for casting surface roughness based on MATLAB is compiled, which provides a solid foundation for the online and fast detection of casting surface roughness.

II. CASTING SURFACE IMAGE ACQUISITION SYSTEM

The acquisition system is composed of the sample carrier, a microscope, a CCD camera, an image acquisition card, and the computer. The sample carrier is used to hold the castings under test. Depending on the experimental requirements, either a fixed carrier can be selected and the sample position changed manually, or cured specimens can be selected and the position of the sampling stage changed. Figure 1 shows the whole processing procedure. First, the castings to be inspected are placed against an illuminated background as far as possible; then, by adjusting the optical lens and setting the CCD camera resolution and exposure time, the pictures collected by the CCD are saved to computer memory through the acquisition card. Image preprocessing and feature value extraction of the casting surface with the corresponding software follow. Finally the detection result is output.

III. CASTING SURFACE IMAGE PROCESSING

Casting surface image processing includes image editing, equalization processing, image enhancement, and image binarization. The original and clipped images of the measured casting are given in Figure 2.
In the figure, a) presents the original image and b) shows the clipped image.

A. Image Enhancement

Image enhancement is a processing method that highlights certain image information according to specific needs and at the same time weakens or removes unwanted information [5]. In order to obtain a clearer contour of the casting surface, equalization of the image, namely correction of the image histogram, should be carried out before image segmentation. Figure 3 shows the original grayscale image, the equalized image, and their histograms. As shown in the figure, after gray-level equalization each gray level of the histogram has roughly the same number of pixels and the histogram becomes flatter. The image appears clearer after the correction and its contrast is enhanced.

Fig. 2 Casting surface image
Fig. 3 Equalization processing image

B. Image Segmentation

Image segmentation is in essence a process of pixel classification, and thresholding is a very important segmentation technique. The optimal threshold is obtained through the instruction thresh = graythresh(II). Figure 4 shows the binarized image. The black areas of the image display the portion of the contour whose gray value is less than the threshold (0.43137), while the white areas show gray values greater than the threshold. The shadows and shading that emerge in the bright region may be caused by noise or by surface depressions.

Fig. 4 Binary conversion

IV. ROUGHNESS PARAMETER EXTRACTION

In order to detect the surface roughness, it is necessary to extract feature parameters of roughness. The histogram mean and variance are parameters used to characterize the texture size of the surface contour, while the peak area per unit surface is a parameter that reflects the roughness of the workpiece in the horizontal direction, and the kurtosis parameter can characterize the roughness in both the vertical and horizontal directions. Therefore, this paper establishes the histogram mean and variance, the peak area per unit surface, and the kurtosis (steepness) as the roughness evaluation parameters for the three-dimensional assessment of castings. An image preprocessing and feature extraction interface is compiled based on MATLAB. Figure 5 shows the detection interface for surface roughness. Image preprocessing of the clipped casting can be achieved with this software, which includes image filtering, image enhancement, image segmentation, and histogram equalization, and it can also display the extracted evaluation parameters of surface roughness.

Fig. 5 Automatic roughness measurement interface

V. CONCLUSIONS

This paper investigates a casting surface roughness measuring method based on digital image processing technology. The method is composed of image acquisition, image enhancement, image binarization, and the extraction of characteristic roughness parameters of the casting surface. The interface for image preprocessing and the extraction of roughness evaluation parameters is compiled with MATLAB, which provides a solid foundation for the online and fast detection of casting surface roughness.

REFERENCES
[1] Xu Deyan, Lin Zunqi. The optical surface roughness research progress and direction [J]. Optical Instruments, 1996, 18(1): 32-37.
[2] Wang Yujing. Turning surface roughness based on image measurement [D]. Harbin: Harbin University of Science and Technology.
[3] BRADLEY C. Automated surface roughness measurement [J].
The International Journal of Advanced Manufacturing Technology, 2000, 16(9): 668-674.
[4] Li Chenggui, Li Xingshan, Qiang Xifu. 3D surface topography measurement method [J]. Aerospace Measurement Technology, 2000, 20(4): 2-10.
[5] Liu He. Digital image processing and application [M]. China Electric Power Press, 2005.

Translation: Application of Digital Image Processing in the Measurement of Casting Surface Roughness. Abstract: This paper presents a surface image acquisition system based on digital image processing technology.
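The MATLAB interface described in the paper is not reproduced here; as a hedged sketch of the same two preprocessing steps, the following Python/NumPy code performs histogram equalization and a global Otsu threshold (the method behind MATLAB's graythresh). The function names and the synthetic test image are illustrative assumptions.

```python
import numpy as np

def equalize(img, L=256):
    """Histogram equalization of an 8-bit gray-scale image."""
    hist = np.bincount(img.ravel(), minlength=L)
    cdf = hist.cumsum() / img.size
    return np.round(cdf[img] * (L - 1)).astype(np.uint8)

def otsu_threshold(img, L=256):
    """Global threshold maximizing between-class variance (as graythresh does)."""
    p = np.bincount(img.ravel(), minlength=L) / img.size
    best_t, best_var = 0, 0.0
    for t in range(1, L):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, L) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
eq = equalize(img)
t = otsu_threshold(eq)
binary = eq >= t                  # True = brighter than the threshold
```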
CCD Image Processing: Foreign Literature Translation (Chinese-English)
附录附录1翻译部分Raw CCD images are exceptional but not perfect. Due to the digital nature of the data many of the imperfections can be compensated for or calibrated out of the final image through digital image processing.Composition of a Raw CCD Image.A raw CCD image consists of the following signal components:IMAGE SIGNAL - The signal from the source.Electrons are generated from the actual source photons.BIAS SIGNAL - Initial signal already on the CCD before the exposure is taken. This signal is due to biasing the CCD offset slightly above zero A/D counts (ADU).THERMAL SIGNAL - Signal (Dark Current thermal electrons) due to the thermal activity of the semiconductor. Thermal signal is reduced by cooling of the CCD to low temperature.Sources of NoiseCCD images are susceptible to the following sources of noise:PHOTON NOISE - Random fluctuations in the photon signal of the source. The rate at which photons are received is not constant.THERMAL NOISE - Statistical fluctuations in the generation of Thermal signal. The rate at which electrons are produced in the semiconductor substrate due to thermal effects is not constant.READOUT NOISE - Errors in reading the signal; generally dominated by theon-chip amplifier.QUANTIZATION NOISE - Errors introduced in the A/D conversion process.SENSITIVITY VARIATION - Sensitivity variations from photosite to photosite on the CCD detector or across the detector. Modern CCD's are uniform to better than 1%between neighboring photosites and uniform to better than 10% across the entire surface.Noise CorrectionsREDUCING NOISE - Readout Noise and Quantization Noise are limited by the construction of the CCD camera and can not be improved upon by the user. Thermal Noise, however, can be reduced by cooling of the CCD (temperature regulation). The Sensitivity Variations can be removed by proper flat fielding.CORRECTING FOR THE BIAS AND THERMAL SIGNALS - The Bias and Thermal signals can be subtracted out from the Raw Image by taking what is called a Dark Exposure. The dark exposure is a measure of the Bias Signal and Thermal Signal and may simply be subtracted from the Raw Image.FLAT FIELDING -A record of the photosite to photosite sensitivity variations can be obtained by taking an exposure of a uniformly lit 'flat field". These variations can then be divided out of the Raw Image to produce an image essentially free from this source of error. Any length exposure will do, but ideally one which saturates the pixels to the 50% or 75% level is best.The Final Processed ImageThe final Processed Image which removes unwanted signals and reduces noise as best we can is computed as follows:Final Processed Image = (Raw - Dark)/FlatAll of the digital image processing functions described above can be accomplished by using CCDOPS software furnished with each SBIG imaging camera. The steps to accomplish them are described in the Operating Manual furnished with each SBIG imaging camera. At SBIG we offer our technical support to help you with questions on how to improve your images.HOW TO SELECT THE CORRECT CCD IMAGING CAMERA FOR YOUR TELESCOPEWhen new customers contact SBIG we discuss their imaging camera application. We try to get an idea of their interests. We have found this method is an effective way of insuring that our customers get the right imaging camera for their purposes. Someof the questions we ask are as follows:What type of telescope do you presently own? 
Having this information allows us to match the CCD imaging Camera's parameters, pixel size and field of view to your telescope. We can also help you interface the CCD imaging camera's automatic guiding functions to your telescope.Are you a MAC or PC user? Since our software supports both of these platforms we can insure that you receive the correct software. We can also answer questions about any unique functions in one or the other. We can send you a demonstration copy of the appropriate software for your review.Do you have a telescope drive base with an autoguider port? Do you want to operate from a remote computer? Companies like Software Bisque fully support our products with telescope control and imaging camera software.Do you want to take photographic quality images of deep space objects, image planets, or perform wide field searches for near earth asteroids or supernovas? In learning about your interests we can better guide you to the optimum CCD pixel size and imaging area for the application.Do you want to make photometric measurements of variable stars or determine precise asteroid positions? From this information we can recommend a CCD imaging camera model and explain how to use the specific analysis functions to perform these tasks. We can help you characterize your imaging camera by furnishing additional technical data.Do you want to automatically guide long uninterrupted astrophotographs? As the company with the most experience in CCD autoguiding we can help you install and operate a CCD autoguider on your telescope. The Model STV has a worldwide reputation for accurate guiding on dim guide stars. No matter what type of telescope you own we can help you correctly interface it and get it working properly.SBIG CCD IMAGING CAMERASThe SBIG product line consists of a series of thermoelectrically cooled CCD imaging cameras designed for a wide range of applications ranging from astronomy, tricolor imaging, color photometry, spectroscopy, medical imaging, densitometry, to chemiluminescence and epifluorescence imaging, etc. This catalog includes information on astronomical imaging cameras, scientific imaging cameras,autoguiding, and accessories. We have tried to arrange the catalog so that it is easy to compare products by specifications and performance. The tables in the product section compare some of the basic characteristics on each CCD imaging camera in our product line. You will find a more detailed set of specifications with each individual imaging camera description.HOW TO GET STARTED USING YOUR CCD IMAGING CAMERAIt all starts with the software. If there's any company well known for its outstanding imaging camera software it's SBIG. Our CCDOPS Operating Software is well known for its user oriented camera control features and stability. CCDOPS is available for free download from our web site along with sample images that you can display and analyze using the image processing and analysis functions of the CCDOPS software. You can become thoroughly familiar with how our imaging cameras work and the capabilities of the software before you purchase an imaging camera. We also include CCDSoftV5 and TheSky from Software Bisque with most of our cameras at no additional charge. Macintosh users receive a free copy of EquinoX planetarium and camera control software for the MacOS-X operating system. No other manufacturer offers better software than you get with SBIG cameras. 
New customers receiving their CCD imaging camera should first read the installation section in their CCDOPS Operating Manual. Once you have read that section you should have no difficulty installing CCDOPS software on your hard drive, connecting the USB cable from the imaging camera to your computer, initiating the imaging camera and within minutes start taking your first CCD images. Many of our customers are amazed at how easy it is to start taking images. Additional information can be found by reading the image processing sections of the CCDOPS and CCDSoftV5 Manuals. This information allows you to progress to more advanced features such as automatic dark frame subtraction of images, focusing the imaging camera, viewing, analyzing and processing the images on the monitor, co-adding images, taking automatic sequences of images, photometric and astrometric measurements, etc.A PERSONAL TOUCH FROM SBIGAt SBIG we have had much success with a program in which we continually review customer's images sent to us on disk or via e-mail. We can often determine the cause of a problem from actual images sent in by a user. We review the images and contacteach customer personally. Images displaying poor telescope tracking, improper imaging camera focus, oversaturated images, etc., are typical initial problems. We will help you quickly learn how to improve your images. You can be assured of personal technical support when you need it. The customer support program has furnished SBIG with a large collection of remarkable images. Many customers have had their images published in SBIG catalogs, ads, and various astronomy magazines. We welcome the chance to review your images and hope you will take advantage of our trained staff to help you improve your images.TRACK AND ACCUMULATE (U.S. Patent # 5,365,269)Using an innovative engineering approach SBIG developed an imaging camera function called Track & Accumulate (TRACCUM) in which multiple images are automatically registered to create a single long exposure. Since the long exposure consists of short images the total combined exposure significantly improves resolution by reducing the cumulative telescope periodic error. In the TRACCUM mode each image is shifted to correct guiding errors and added to the image buffer. In this mode the telescope does not need to be adjusted. The great sensitivity of the CCD virtually guarantees that there will be a usable guide star within the field of view. This feature provides dramatic improvement in resolution by reducing the effect of periodic error and allowing unattended hour long exposures. SBIG has been granted U.S. Patent # 5,365,269 for Track & Accumulate.DUAL CCD SELF-GUIDING (U.S. Patent # 5,525,793)In 1994 with the introduction of Models ST-7 and ST-8 CCD Imaging Cameras which incorporate two separate CCD detectors, SBIG was able to accomplish the goal of introducing a truly self-guided CCD imaging camera. The ability to select guide stars with a separate CCD through the full telescope aperture is equivalent to having a thermoelectrically cooled CCD autoguider in your imaging camera. This feature has been expanded to all dual sensor ST series cameras (ST-7/8/9/10/2000) and all STL series cameras (STL-1001/1301/4020/6303/11000). One CCD is used for guiding and the other for collecting the image. They are mounted in close proximity, both focused at the same plane, allowing the imaging CCD to integrate while the PC uses the guiding CCD to correct the telescope. 
Using a separate CCD for guiding allows 100% of the primary CCD's active area to be used to collect the image. The telescope correction rate and limiting guide star magnitude can be independentlyselected. Tests at SBIG indicated that 95% of the time a star bright enough for guiding will be found on a TC237 tracking CCD without moving the telescope, using an f/6.3 telescope. The self-guiding function quickly established itself as the easiest and most accurate method for guiding CCD images. Placing both detectors in close proximity at the same focal plane insures the best possible guiding. Many of the long integrated exposures now being published are taken with this self-guiding method, producing very high resolution images of deep space objects. SBIG has been granted U.S. Patent # 5,525,793 for the dual CCD Self-Guiding function.COMPUTER PLATFORMSSBIG has been unique in its support of both PC and Macintosh platforms for our cameras. The imaging cameras in this catalog communicate with the host computer through standard serial or USB ports depending on the specific models. Since there are no external plug-in boards required with our imaging camera systems we encourage users to operate with the new family of high resolution graphics laptop computers. We furnish Operating Software for you to install on your host computer. Once the software is installed and communication with the imaging camera is set up complete control of all of the imaging camera functions is through the host computer keyboard. The recommended minimum requirements for memory and video graphics are as shown below.GENERAL CONCLUSION(1) of this item from the theoretical analysis of the use of CCD technology for real-time non-contact measuring the diameter of the feasibility of measuring it is fast, efficient, accurate, high degree of automation, off-production time and so on.(2) projects to test the use of CCD technology to achieve real-time, online non-contact measurement, developed by the CCD-line non-contact diameter measurement system has a significant technology advanced and practical application of significance. (3) from the theoretical and experimental project on the summary of the utilization of CCD technology developed by SCM PV systems improve the measurement accuracy of several ways: improving crystal, a multi-pixel CCD devices and take full advantage of CCD-like device Face width.译文原料CCD图像是例外,但并非十全十美。
Image Processing: Foreign Literature Translation
English original: Image processing is not a one-step process. We are able to distinguish between several steps which must be performed one after the other until we can extract the data of interest from the observed scene. In this way a hierarchical processing scheme is built up, as sketched in the figure, which gives an overview of the different phases of image processing.

Image processing begins with the capture of an image with a suitable, not necessarily optical, acquisition system. In a technical or scientific application, we may choose to select an appropriate imaging system. Furthermore, we can set up the illumination system, choose the best wavelength range, and select other options to capture the object feature of interest in the best way in an image. Once the image is sensed, it must be brought into a form that can be treated with digital computers. This process is called digitization.

The problems of traffic are becoming more and more serious, and Intelligent Transport Systems (ITS) have emerged in response. Automatic recognition of license plates is one of the most significant subjects to grow out of the combination of computer vision and pattern recognition. The image input to the computer is processed and analyzed in order to locate the position of the license plate and recognize the characters on it, expressing these characters as a text string. The license plate recognition system (LPRS) has important applications in ITS. In an LPRS, the first step is to locate the license plate in the captured image, which is very important for character recognition; the recognition rate for the plate is governed by the accuracy of plate location. In this paper, several image manipulation methods are compared and analyzed, and a solution for localization of the car plate is derived. Experience shows that good results have been obtained with these methods.
The methods based on edge map and frequency analysis is used in the process of the localization of the license plate, that is to say, extracting the characteristics of the license plate in the car images after being checked up for the edge, and then analyzing and processing until the probably area of license plate is extracted.The automated license plate location is a part of the image processing ,it’s also an important part in the intelligent traffic system.It is the key step in the Vehicle License Plate Recognition(LPR).A method for the recognition of images of different backgrounds and different illuminations is proposed in the paper.the upper and lower borders are determined through the gray variation regulation of the character distribution.The left and right borders are determined through the black-white variation of the pixels in every row.The first steps of digital processing may include a number of different operations and are known as image processing.If the sensor has nonlinear characteristics, these need to be corrected.Likewise,brightness and contrast of the image may require improvement.Commonly,too,coordinate transformations are needed torestore geometrical distortions introduced during image formation.Radiometric and geometric corrections are elementary pixel processing operations.It may be necessary to correct known disturbances in the image,for instance caused by a defocused optics,motion blur,errors in the sensor,or errors in the transmission of image signals.We also deal with reconstruction techniques which are required with many indirect imaging techniques such as tomography that deliver no direct image.A whole chain of processing steps is necessary to analyze and identify objects.First,adequate filtering procedures must be applied in order to distinguish the objects of interest from other objects and the background.Essentially,from an image (or several images),one or more feature images are extracted.The basic tools for this task are averaging and edge detection and the analysis of simple neighborhoods and complex patterns known as texture in image processing.An important feature of an object is also its motion.Techniques to detect and determine motion are necessary.Then the object has to be separated from the background.This means that regions of constant features and discontinuities must be identified.This process leads to a label image.Now that we know the exact geometrical shape of the object,we can extract further information such as the mean gray value,the area,perimeter,and other parameters for the form of the object[3].These parameters can be used to classify objects.This is an important step in many applications of image processing,as the following examples show:In a satellite image showing an agricultural area,we would like to distinguish fields with different fruits and obtain parameters to estimate their ripeness or to detect damage by parasites.There are many medical applications where the essential problem is to detect pathologi-al changes.A classic example is the analysis of aberrations in chromosomes.Character recognition in printed and handwritten text is another example which has been studied since image processing began and still poses significant difficulties.You hopefully do more,namely try to understand the meaning of what you are reading.This is also the final step of image processing,where one aims to understand the observed scene.We perform this task more or less unconsciously whenever we use our visual system.We recognize people,we can easily 
distinguish between the image of a scientific lab and that of a living room,and we watch the traffic to cross a street safely.We all do this without knowing how the visual system works.For some times now,image processing and computer-graphics have been treated as two different areas.Knowledge in both areas has increased considerably and more complex problems can now be treated.Computer graphics is striving to achieve photorealistic computer-generated images of three-dimensional scenes,while image processing is trying to reconstruct one from an image actually taken with a camera.In this sense,image processing performs the inverse procedure to that of computer graphics.We start with knowledge of the shape and features of an object—at the bottom of Fig. and work upwards until we get a two-dimensional image.To handle image processing or computer graphics,we basically have to work from the sameknowledge.We need to know the interaction between illumination and objects,how a three-dimensional scene is projected onto an image plane,etc.There are still quite a few differences between an image processing and a graphics workstation.But we can envisage that,when the similarities and interrelations between computergraphics and image processing are better understood and the proper hardware is developed,we will see some kind of general-purpose workstation in the future which can handle computer graphics as well as image processing tasks[5].The advent of multimedia,i. e. ,the integration of text,images,sound,and movies,will further accelerate the unification of computer graphics and image processing.In January 1980 Scientific American published a remarkable image called Plume2,the second of eight volcanic eruptions detected on the Jovian moon by the spacecraft V oyager 1 on 5 March 1979.The picture was a landmark image in interplanetary exploration—the first time an erupting volcano had been seen in space.It was also a triumph for image processing.Satellite imagery and images from interplanetary explorers have until fairly recently been the major users of image processing techniques,where a computer image is numerically manipulated to produce some desired effect-such as making a particular aspect or feature in the image more visible.Image processing has its roots in photo reconnaissance in the Second World War where processing operations were optical and interpretation operations were performed by humans who undertook such tasks as quantifying the effect of bombing raids.With the advent of satellite imagery in the late 1960s,much computer-based work began and the color composite satellite images,sometimes startlingly beautiful, have become part of our visual culture and the perception of our planet.Like computer graphics,it was until recently confined to research laboratories which could afford the expensive image processing computers that could cope with the substantial processing overheads required to process large numbers of high-resolution images.With the advent of cheap powerful computers and image collection devices like digital cameras and scanners,we have seen a migration of image processing techniques into the public domain.Classical image processing techniques are routinely employed by graphic designers to manipulate photographic and generated imagery,either to correct defects,change color and so on or creatively to transform the entire look of an image by subjecting it to some operation such as edge enhancement.A recent mainstream application of image processing is the compression of 
images, either for transmission across the Internet or for moving video in video telephony and video conferencing. Video telephony is one of the current crossover areas that employ both computer graphics and classical image processing techniques to try to achieve very high compression rates. All this is part of an inexorable trend towards the digital representation of images. Indeed the most powerful image form of the twentieth century, the TV image, is also about to be taken into the digital domain.

Image processing is characterized by a large number of algorithms that are specific solutions to specific problems. Some are mathematical or context-independent operations that are applied to each and every pixel. For example, we can use Fourier transforms to perform image filtering operations. Others are "algorithmic": we may use a complicated recursive strategy to find those pixels that constitute the edges in an image. Image processing operations often form part of a computer vision system. The input image may be filtered to highlight or reveal edges prior to a shape detection step; such operations are usually known as low-level operations. In computer graphics, filtering operations are used extensively to avoid aliasing or sampling artifacts.

Chinese translation: Image processing is not a one-step process.
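The plate-location idea sketched earlier in this section (finding the plate band from the gray-level variation of the rows) can be illustrated, in a deliberately simplified form that is not the authors' exact algorithm, as follows. The band height and the use of a plain horizontal difference as the edge measure are assumptions made only for this sketch.

```python
import numpy as np

def locate_plate_band(gray, band_height=40):
    """Toy vertical-edge projection: return the row range with the densest edges."""
    gray = gray.astype(np.float64)
    # Horizontal gray-level variation (strong at character strokes).
    edges = np.abs(np.diff(gray, axis=1))
    row_energy = edges.sum(axis=1)
    # Slide a window of band_height rows and keep the one with maximum energy.
    window = np.convolve(row_energy, np.ones(band_height), mode="valid")
    top = int(np.argmax(window))
    return top, top + band_height

gray = np.random.randint(0, 256, (240, 320)).astype(np.uint8)
top, bottom = locate_plate_band(gray)
candidate = gray[top:bottom, :]   # coarse horizontal band likely to contain the plate
```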
MATLAB Image Processing: Foreign Literature Translation
Appendix A, English original: Scene recognition for mine rescue robot localization based on vision

CUI Yi-an (崔益安), CAI Zi-xing (蔡自兴), WANG Lu (王璐)

Abstract: A new scene recognition system is presented based on fuzzy logic and a hidden Markov model (HMM) that can be applied to mine rescue robot localization during emergencies. The system uses a monocular camera to acquire omni-directional images of the mine environment where the robot is located. By adopting a center-surround difference method, salient local image regions are extracted from the images as natural landmarks. These landmarks are organized with the HMM to represent the scene where the robot is, and a fuzzy logic strategy is used to match the scene and landmarks. In this way the localization problem, which is the scene recognition problem in this system, can be converted into the evaluation problem of the HMM. These techniques give the system the ability to deal with changes in scale, 2D rotation, and viewpoint. The results of experiments also prove that the system achieves a high ratio of recognition and localization in both static and dynamic mine environments.

Key words: robot localization; scene recognition; salient image; matching strategy; fuzzy logic; hidden Markov model

1 Introduction

Search and rescue in disaster areas is a burgeoning and challenging subject in the field of robotics [1]. Mine rescue robots were developed to enter mines during emergencies to locate possible escape routes for those trapped inside and to determine whether it is safe for humans to enter. Localization is a fundamental problem in this field. Localization methods based on cameras can be mainly classified into geometric, topological, or hybrid ones [2]. With its feasibility and effectiveness, scene recognition has become one of the important technologies of topological localization. Currently most scene recognition methods are based on global image features and have two distinct stages: training offline and matching online.
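The paper's center-surround landmark extraction and the HMM/fuzzy matching machinery are not reproduced here. As a rough, assumed illustration of the center-surround idea only, a difference-of-Gaussians saliency map can be computed as below; the two sigmas and the 0.6 threshold are arbitrary choices for the sketch, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(img, sigma_center=2.0, sigma_surround=8.0):
    """Difference-of-Gaussians as a simple center-surround saliency map."""
    img = img.astype(np.float64)
    center = gaussian_filter(img, sigma_center)
    surround = gaussian_filter(img, sigma_surround)
    saliency = np.abs(center - surround)
    return saliency / (saliency.max() + 1e-12)

img = np.random.randint(0, 256, (200, 200)).astype(np.uint8)
sal = center_surround(img)
landmarks = sal > 0.6            # candidate salient regions / natural landmarks
```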
Image Processing: Chinese-English Parallel Translated Literature
Chinese-English parallel translated literature (the document contains the English original and the Chinese translation).

Translation: Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns

Abstract: This paper describes a theoretically very simple yet highly effective multiresolution approach to gray-scale and rotation invariant texture classification, based on local binary patterns and the nonparametric discrimination of sample and prototype distributions.
The method rests on the observation that certain "uniform" local binary patterns are fundamental properties of local image texture, and their occurrence histogram has proven to be a very effective texture feature.
We derive a generalized gray-scale and rotation invariant operator that allows uniform patterns to be detected for any quantization of the angular space and any spatial resolution, and we present a multiresolution analysis method that combines multiple operators.
By definition the operator is invariant to any monotonic change of the image gray scale, so the proposed method is very robust to gray-level variations.
Another advantage is computational simplicity: the operator can be realized with a few operations in a small neighborhood and a lookup table.
Excellent experimental results are obtained in true problems of rotation invariance, where the classifier is trained at one particular rotation angle and tested with samples from other rotation angles, demonstrating that good discrimination can be achieved with the occurrence statistics of simple rotation invariant local binary patterns.
These operators characterize the spatial structure of local image texture, and their performance can be further improved by combining them with a rotation invariant measure of the local gray-level differences (contrast) of the texture.
Together these orthogonal measures prove to be a very powerful tool for rotation invariant texture analysis.

Keywords: nonparametric, texture analysis, Outex, Brodatz, classification, histograms, contrast

2 Gray-Scale and Rotation Invariant Local Binary Patterns

We describe the gray-scale and rotation invariant operator by defining the texture T in a local neighborhood of a monochrome texture image as the joint distribution of the gray levels of P (P > 1) pixels:

T = t(g_c, g_0, ..., g_{P-1})    (1)

where g_c is the gray value of the center pixel of the local neighborhood, and g_p (p = 0, 1, ..., P-1) are the gray values of the set of spatially symmetric pixels on a circular neighborhood of radius R (R > 0).
If the coordinates of g_c are (0, 0), then the coordinates of g_p are given by (-R sin(2πp/P), R cos(2πp/P)) (see Fig. 1).
Fig. 1 illustrates circularly symmetric neighbor sets for various values of (P, R).
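A minimal sketch of the circular sampling and the rotation-invariant uniform code implied by the definitions above is given below. It uses nearest-neighbour sampling instead of the bilinear interpolation used in the paper, so it is an illustrative re-implementation under that simplification, not the authors' code.

```python
import numpy as np

def lbp_riu2(img, P=8, R=1.0):
    """Rotation-invariant uniform LBP code map (borders handled by clipping)."""
    img = img.astype(np.float64)
    h, w = img.shape
    rows, cols = np.mgrid[0:h, 0:w]
    signs = []
    for p in range(P):
        # Sampling offsets on the circle of radius R, as in the coordinate formula above.
        dy = -R * np.sin(2 * np.pi * p / P)
        dx = R * np.cos(2 * np.pi * p / P)
        yy = np.clip(np.round(rows + dy).astype(int), 0, h - 1)
        xx = np.clip(np.round(cols + dx).astype(int), 0, w - 1)
        signs.append((img[yy, xx] >= img).astype(np.int32))
    s = np.stack(signs)                              # shape (P, h, w)
    # Uniformity measure: number of 0/1 transitions around the circular pattern.
    u = np.abs(s - np.roll(s, 1, axis=0)).sum(axis=0)
    return np.where(u <= 2, s.sum(axis=0), P + 1)    # P+2 distinct output labels

img = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
codes = lbp_riu2(img, P=8, R=1.0)
hist = np.bincount(codes.ravel(), minlength=10)      # occurrence histogram as texture feature
```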
Digital Image Processing: English Literature Translation Reference
Hybrid Genetic Algorithm Based Image Enhancement Technology

Mu Dongzhou, Department of Information Engineering, XuZhou College of Industrial Technology, XuZhou, China, ****************.cn
Xu Chao and Ge Hongmei, Department of Information Engineering, XuZhou College of Industrial Technology, XuZhou, China, ***************.cn, ***************.cn

Abstract: In image enhancement, Tubbs proposed a normalized incomplete Beta function to represent several kinds of commonly used nonlinear transform functions. However, how to determine the coefficients of the Beta function is still a problem. We propose a hybrid genetic algorithm that combines differential evolution with a genetic algorithm for the image enhancement process, and we use the fast search ability of the algorithm to carry out adaptive mutation and search. Finally, a simulation experiment is used to demonstrate the effectiveness of the method.

Keywords: image enhancement; hybrid genetic algorithm; adaptive enhancement

I. INTRODUCTION

During image formation, transfer, or conversion, objective factors such as system noise, under- or over-exposure, and relative motion often make the acquired image differ from the original; the image is said to be degraded. A degraded image is usually blurred, and the information extracted from it by a machine is reduced or even wrong, so measures must be taken to improve it. Image enhancement technology is proposed in this sense, and its purpose is to improve image quality. Depending on the situation of the image, various techniques are used to highlight some of the information in the image and to reduce or remove irrelevant information, so as to emphasize the overall or local features of interest. There is still no unified theory of image enhancement; image enhancement techniques can be divided into three categories: point operations, spatial-domain enhancement methods, and frequency-domain enhancement methods. This paper presents an adaptive image enhancement method, called the hybrid genetic algorithm, that automatically adjusts to the characteristics of the image. It combines the adaptive search capability of the differential evolution algorithm and automatically determines the parameter values of the transformation function in order to achieve adaptive image enhancement.

II. IMAGE ENHANCEMENT TECHNOLOGY

Image enhancement emphasizes or highlights certain features of an image, such as contours, contrast, and edges, in order to facilitate detection or further analysis and processing. Enhancement does not increase the information in the image data, but it expands the dynamic range of the chosen features, making them easier to detect or identify and laying a good foundation for subsequent detection and analysis.

Image enhancement methods consist of point operations, spatial filtering, and frequency-domain filtering. Point operations include contrast stretching, histogram modeling, noise limiting, and image subtraction techniques. Spatial filtering includes low-pass filtering, median filtering, and high-pass filtering (image sharpening). Frequency-domain filtering includes homomorphic filtering and multi-scale, multi-resolution image enhancement [1].
III. DIFFERENTIAL EVOLUTION ALGORITHM

Differential Evolution (DE) was first proposed by Price and Storn. Compared with other evolutionary algorithms, DE has a strong spatial search capability and is easy to implement and to understand. DE is a novel search algorithm: it first generates the initial population randomly in the search space, then computes the difference vector between any two members of the population and adds this difference to a third member to form a new individual. If the fitness of the new individual is better than that of the original one, the original individual is replaced.

DE uses the same operations as a genetic algorithm, namely mutation, crossover, and selection, but the methods differ. Suppose the population size is P and the vector dimension is D; the target vector can be expressed as

x_i = [x_i1, x_i2, ..., x_iD]  (i = 1, ..., P)    (1)

and the mutation vector can be expressed as

V_i = X_r1 + F × (X_r2 - X_r3)  (i = 1, ..., P)    (2)

where X_r1, X_r2, and X_r3 are three randomly selected individuals from the population, with r1 ≠ r2 ≠ r3 ≠ i. F is a real constant factor in the range [0, 2] used to control the influence of the difference vector, commonly referred to as the scaling factor. Clearly, the smaller the difference vector, the smaller the perturbation, which means that if the population is close to the optimum the perturbation will automatically be reduced.

The selection operation of DE is a "greedy" selection mode: if and only if the fitness of the new vector u_i is better than that of the target vector x_i will u_i be retained in the next generation; otherwise, the target vector x_i remains in the population and serves again as a parent vector for the next generation.

IV. HYBRID GA FOR IMAGE ENHANCEMENT

Image enhancement is the foundation of fast object detection, so it is necessary to find a real-time algorithm with good performance. To meet the practical requirements of different systems, many algorithms need manually determined parameters and thresholds. A normalized incomplete Beta function can cover the typical image enhancement transform types, but determining the parameters of the Beta function still poses problems. This section presents an adaptive image enhancement method based on the Beta function: the adaptive search capability of the hybrid genetic algorithm automatically determines the parameter values of the transformation function in order to achieve adaptive image enhancement.

The purpose of image enhancement is to improve image quality, for example by making specified features more prominent or restoring details in a degraded image. A common feature of degraded images is low contrast, with the image appearing bright, dim, or concentrated in gray. A low-contrast degraded image can be enhanced by stretching its dynamic range, i.e., by a gray-level transformation. We use I_xy to denote the gray level of the point (x, y), which can be expressed as

I_xy = f(x, y)    (3)

where f is a linear or nonlinear function. In general, gray images have four types of nonlinear transformation [6][7], as shown in Figure 1. We use a normalized incomplete Beta function to automatically fit these four categories of image enhancement transformation curves.
It is defined in (4):

f(u) = B^(-1)(α, β) ∫_0^u t^(α-1) (1 - t)^(β-1) dt,  0 < u < 1    (4)

where

B(α, β) = ∫_0^1 t^(α-1) (1 - t)^(β-1) dt    (5)

For different values of α and β, we obtain different response curves from (4) and (5). The hybrid GA can use the adaptive differential evolution algorithm of the previous section to search for the best values of the Beta function parameters; each pixel gray value is then passed through the Beta function, giving the corresponding transformation of Figure 1 and resulting in the desired image enhancement. The detailed description is as follows.

Assume the gray level of the original image at pixel (x, y) is denoted by i_xy, (x, y) ∈ Ω, where Ω is the image domain, and the enhanced image is denoted by I_xy. First, the image gray values are normalized into [0, 1] by (6):

g_xy = (i_xy - i_min) / (i_max - i_min)    (6)

where i_max and i_min are the maximum and minimum gray values of the image, respectively. The nonlinear transformation function f(u) (0 ≤ u ≤ 1) is then applied to transform the source image. Finally, the hybrid genetic algorithm is used to determine the optimal parameters α and β of the Beta function f(u), and the enhanced image G_xy is obtained by de-normalizing the transformed values.

V. EXPERIMENT AND ANALYSIS

In the simulation, we used two different types of degraded gray-scale images; the program was run 50 times, with a population size of 30 and 600 generations. The results show that the proposed method can very effectively enhance different types of degraded image.

Figure 2 shows an original image of size 320 × 320. Its contrast is low and some details are obscure; in particular the texture of the scarf is not obvious, and the visual effect is poor. Using the method proposed in this section overcomes these issues and gives a satisfactory result, as shown in Figure 5(b): the visual quality is clearly improved. From the histogram, the distribution of image intensity is more uniform, and the distribution of light and dark gray areas is more reasonable.
The hybrid genetic algorithm automatically identified the nonlinear transformation curve, the obtained values of α and β being 9.837 and 5.7912. From the curve it can be seen that the transformation is consistent with class (c) in Figure 3: the middle region is stretched while the two ends are compressed, which agrees with the histogram. The original image has low overall contrast, and compressing both ends while stretching the middle region matches human visual perception, so the enhancement is significantly improved.

Figure 3 shows an original image of size 320 × 256 with low overall intensity. Image (b) is the result of the method proposed in this section; the resolution and contrast of details such as the ground, the chairs, and the clothes are significantly improved compared with the original image. The gray levels of the original image are concentrated in the lower region, while those of the enhanced image are more uniform. The gray transformation is essentially of the same class as Figure 3(a), i.e., the dim regions of the image are stretched; the parameter values were 5.9409 and 9.5704. The inferred type of nonlinear transformation for the degraded image is correct, and the enhancement is visually good and robust.

It is difficult to assess the quality of image enhancement, and there is still no common evaluation criterion. The peak signal-to-noise ratio (PSNR) is commonly used, but it does not reflect the errors perceived by the human visual system. Therefore, we use the edge protection index and the contrast increase index to evaluate the experimental results.

The Edge Protection Index (EPI) is defined as follows:

    (7)

The Contrast Increase Index (CII) is defined as follows:

CII = C_E / C_O,  C = (G_max - G_min) / (G_max + G_min)    (8)

where C_E and C_O denote the contrast of the enhanced and original images, and G_max and G_min are the maximum and minimum gray values. In Figure 4 we compare with a wavelet-transform-based algorithm and give the evaluation numbers in TABLE I. Figures 4(a) and 4(c) show the original image and the result enhanced by the differential evolution algorithm; it can be seen that the contrast is markedly improved, image details are clearer, and edge features are more prominent. Figures (b) and (c) compare wavelet-based enhancement with the hybrid-genetic-algorithm-based enhancement: the wavelet-based method brings out some image detail and improves the visual effect over the original image, but the improvement is not obvious, whereas the adaptive transform based on the hybrid genetic algorithm enhances detail, texture, and clarity much more, which is helpful for subsequent analysis and processing. The wavelet experiment used the "sym4" wavelet; in the differential evolution experiment the parameter values were 5.9409 and 9.5704. For a 256 × 256 image, the adaptive hybrid genetic algorithm implemented in Matlab 7.0 takes about 2 seconds, which is very fast. From the objective evaluation criteria in TABLE I it can be seen that, for both the edge protection index and the contrast increase index, the adaptive hybrid genetic algorithm shows a larger improvement than the traditional wavelet-transform-based method, which demonstrates the objective advantages of the method described in this section.
From the above analysis, we can see that this method is useful and effective.

VI. CONCLUSION

In this paper, in order to preserve the image information, a hybrid genetic algorithm is used for image enhancement. As can be seen from the experimental results, the image enhancement method based on the hybrid genetic algorithm has an obvious effect. Compared with other evolutionary algorithms, the hybrid genetic algorithm performs outstandingly: it is simple, robust, and converges rapidly, finding a nearly optimal solution in each run, while only a few parameters need to be set and the same set of parameters can be used for many different problems. Using the quick search capability of the hybrid genetic algorithm, adaptive mutation and search are carried out for a given test image to determine the optimal parameter values of the transformation function. Compared with an exhaustive method, this significantly reduces the computation time and complexity. Therefore, the proposed image enhancement method has practical value.

REFERENCES
[1] HE Bin et al. Visual C++ Digital Image Processing [M]. Posts & Telecom Press, 2001, 4: 473-477.
[2] Storn R, Price K. Differential Evolution: a Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Space [R]. International Computer Science Institute, Berkeley, 1995.
[3] Tubbs J D. A note on parametric image enhancement [J]. Pattern Recognition, 1997, 30(6): 617-621.
[4] TANG Ming, MA Song De, XIAO Jing. Enhancing Far Infrared Image Sequences with Model Based Adaptive Filtering [J]. Chinese Journal of Computers, 2000, 23(8): 893-896.
[5] ZHOU Ji Liu, LV Hang. Image Enhancement Based on A New Genetic Algorithm [J]. Chinese Journal of Computers, 2001, 24(9): 959-964.
[6] LI Yun, LIU Xuecheng. On Algorithm of Image Contrast Enhancement Based on Wavelet Transformation [J]. Computer Applications and Software, 2008, 8.
[7] XIE Mei-hua, WANG Zheng-ming. The Partial Differential Equation Method for Image Resolution Enhancement [J]. Journal of Remote Sensing, 2005, 9(6): 673-679.

Image Enhancement Technology Based on a Hybrid Genetic Algorithm (Chinese translation). Mu Dongzhou, Department of Information Engineering, XuZhou College of Industrial Technology, XuZhou, China; Xu Chao and Ge Hongmei, Department of Information Engineering, XuZhou College of Industrial Technology, XuZhou, China. Abstract: In image enhancement, Tubbs proposed a normalized incomplete Beta function to represent several kinds of commonly used nonlinear transform functions for the study of image enhancement.
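As a hedged sketch of the transform in Eqs. (4)-(6) above, the code below normalizes an image to [0, 1], applies the normalized (regularized) incomplete Beta function via scipy.special.betainc, and rescales to 8 bits. Here α and β are fixed by hand to values reported in the experiments, instead of being searched by the hybrid GA/DE loop.

```python
import numpy as np
from scipy.special import betainc

def beta_enhance(img, alpha, beta):
    """Apply f(u) = I_u(alpha, beta), the normalized incomplete Beta function."""
    img = img.astype(np.float64)
    g = (img - img.min()) / (img.max() - img.min() + 1e-12)   # Eq. (6)
    f = betainc(alpha, beta, g)                                # Eqs. (4)-(5)
    return np.round(f * 255).astype(np.uint8)                  # de-normalize to 8 bits

img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
out = beta_enhance(img, alpha=5.94, beta=9.57)   # values reported in the experiments
```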
Digital Image Processing Thesis: Chinese-English Parallel Translated Literature
Chinese-English parallel translated literature. Original text: Research on image edge detection algorithms

Abstract: Digital image processing is a relatively young discipline which, with the rapid development of computer technology, is finding increasingly widespread application day by day. The edge is one of the basic features of an image and is widely used in fields such as pattern recognition, image segmentation, image enhancement, and image compression. Methods of image edge detection are many and varied. Among them, the brightness-based algorithms have been studied the longest and their theory is the most mature; they mainly compute the gradient of the image brightness through difference operators and thereby detect edges. The main operators are the Roberts, Laplacian, Sobel, Canny, and LoG operators.
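The abstract above only names the classical difference operators. As an assumed illustration (not taken from the translated paper), the sketch below convolves an image with the Sobel kernels and thresholds the gradient magnitude to obtain an edge map.

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def sobel_edges(img, thresh=100.0):
    """Gradient magnitude via the Sobel operator, followed by a simple threshold."""
    img = img.astype(np.float64)
    gx = convolve(img, SOBEL_X, mode="nearest")
    gy = convolve(img, SOBEL_Y, mode="nearest")
    magnitude = np.hypot(gx, gy)
    return magnitude > thresh

img = np.random.randint(0, 256, (100, 100)).astype(np.uint8)
edges = sobel_edges(img)          # boolean edge map
```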
FPGA Image Processing: Chinese-English Parallel Translated Literature
中英文资料对照外文翻译(文档含英文原文和中文翻译)基于FPGA的快速图像处理系统的设计摘要我们评估、改进硬件、软件架构的性能,目的是为了适应各种不同的图像处理任务。
这个系统架构采用基于现场可编程门阵列(FPGA)和主机电脑。
PC端安装Lab VIEW应用程序,用于控制图像采集和工业相机的视频捕获。
通过USB2.0传输协议执行传输。
FPGA控制器是基于ALTERA的Cyclone II 芯片,其作用是作为一个系统级可编程芯片(SOPC)嵌入NIOSII内核。
该SOPC集成了CPU,片内、外部内存,传输信道,和图像数据处理系统。
采用标准的传输协议和通过软硬件逻辑来调整各种帧的大小。
与其他解决方案作比较,对其一系列的应用进行讨论。
Keywords: hardware/software co-design; image processing; FPGA; embedded systems
1. Introduction
Traditional hardware implementations of image processing generally use DSPs or application-specific integrated circuits (ASICs). However, the pursuit of higher speed and lower cost has shifted solutions toward field-programmable gate arrays (FPGAs). FPGAs offer inherent parallelism and better performance. When an application requires real-time processing under strict constraints, such as video or television signal processing or machine control, an FPGA can execute it better. For demanding computational functions such as filtering, motion estimation, two-dimensional discrete cosine transforms (2-D DCTs), and fast Fourier transforms (FFTs), an FPGA can be optimized more effectively. Functionally, FPGAs easily surpass traditional DSPs with more hardware multipliers, larger memory capacity, and a higher degree of system integration. Combining computer-based imaging applications with an FPGA-based parallel controller requires a hardware/software interface for high-speed transfer. The system is a typical hardware/software co-design: LabVIEW running on the host PC performs imaging, equipped with a camera and frame grabber, while image filters and the other system components run on an Altera FPGA development board at the other end. Image data is transferred at high speed over USB 2.0. The hardware components and the control logic of the FPGA board are coordinated by the embedded NIOS II processor, with USB 2.0 serving as the communication channel.
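The paper presents a hardware design rather than code; purely as an assumed illustration, the following host-side reference model shows the kind of 3x3 integer filter kernel such a pipeline typically applies, which could be used to check frames read back over USB 2.0 against a software "golden" result. The kernel coefficients, frame handling, and border treatment are placeholder choices, not taken from the paper.

```python
# Hypothetical host-side reference model of a 3x3 integer smoothing filter, as a stand-in
# for the hardware image filter; all values here are illustrative assumptions.
import numpy as np

KERNEL = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=np.int32)        # simple smoothing kernel

def filter_frame(frame):
    """Apply the 3x3 kernel with integer arithmetic, as fixed-point hardware would."""
    h, w = frame.shape
    out = np.zeros((h, w), dtype=np.int32)
    norm = int(KERNEL.sum())
    for r in range(1, h - 1):                          # skip the one-pixel border
        for c in range(1, w - 1):
            window = frame[r - 1:r + 2, c - 1:c + 2].astype(np.int32)
            out[r, c] = (window * KERNEL).sum() // norm  # normalize back to 8-bit range
    return out.astype(np.uint8)

# usage: golden = filter_frame(test_frame)  # compare against frames returned by the board
```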
Digital Image Processing: English Original and Translation
Digital Image Processing and Edge DetectionDigital Image ProcessingInterest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation; and processing of image data for storage, transmission, and representation for autonomous machine perception.An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pixels, and pixels. Pixel is the term most widely used to denote the elements of a digital image.Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spec- trum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images. These include ultra- sound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications.There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vi- sion, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields asingle number) would not be considered an image processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI) whose objective is to emulate human intelligence. The field of AI is in its earliest stages of infancy in terms of development, with progress having been much slower than originally anticipated. The area of image analysis (also called image understanding) is in be- tween image processing and computer vision.There are no clearcut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high level processes. Low-level processes involve primitive opera- tions such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processing on images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. 
A midlevel process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects). Finally, higher level processing involves “making sense” of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision.Based on the preceding comments, we see that a logical place of overlap between image processing and image analysis is the area of recognition of individual regions or objects in an image. Thus, what we call in this book digital image processing encompasses processes whose inputs and outputs are images and, in addition, encompasses processes that extract attributes from images, up to and including the recognition of individual objects. As a simple illustration to clarify these concepts, consider the area of automated analysis of text. The processes of acquiring an image of the area containing the text, preprocessing that image, extracting(segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters are in the scope of what we call digital image processing in this book. Making sense of the content of the page may be viewed as being in the domain of image analysis and even computer vision, depending on the level of complexity implied by the statement “making sense.”As will become evident shortly, digital image processing, as we have defined it, is used successfully in a broad range of areas of exceptional social and economic value.The areas of application of digital image processing are so varied that some form of organization is desirable in attempting to capture the breadth of this field. One of the simplest ways to develop a basic understanding of the extent of image processing applications is to categorize images according to their source (e.g., visual, X-ray, and so on). The principal energy source for images in use today is the electromagnetic energy spectrum. Other important sources of energy include acoustic, ultrasonic, and electronic (in the form of electron beams used in electron microscopy). Synthetic images, used for modeling and visualization, are generated by computer. In this section we discuss briefly how images are generated in these various categories and the areas in which they are applied.Images based on radiation from the EM spectrum are the most familiar, especially images in the X-ray and visual bands of the spectrum. Electromagnet- ic waves can be conceptualized as propagating sinusoidal waves of varying wavelengths, or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern and moving at the speed of light. Each massless particle contains a certain amount (or bundle) of energy. Each bundle of energy is called a photon. If spectral bands are grouped according to energy per photon, we obtain the spectrum shown in fig. below, ranging from gamma rays (highest energy) at one end to radio waves (lowest energy) at the other. The bands are shown shaded to convey the fact that bands of the EM spectrum are not distinct but rather transition smoothly from one to theother.Image acquisition is the first process. Note that acquisition could be as simple as being given an image that is already in digital form. 
Generally, the image acquisition stage involves preprocessing, such as scaling.Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is when we increase the contrast of an image because “it looks better.” It is important to keep in mind that enhancement is a very subjective area of image processing. Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a “good”enhancement result.Color image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet. It covers a number of fundamental concepts in color models and basic color processing in a digital domain. Color is used also in later chapters as the basis for extracting features of interest in an image.Wavelets are the foundation for representing images in various degrees of resolution. In particular, this material is used in this book for image data compression and for pyramidal representation, in which images are subdivided successively into smaller regions.Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it.Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is true particularly in uses of the Internet, which are characterized by significant pictorial content. Image compression is familiar (perhaps inadvertently) to most users of computers in the form of image , such as the jpg used in the JPEG (Joint Photographic Experts Group) image compression standard.Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape. The material in this chapter begins a transition from processes that output images to processes that output image attributes.Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a longway toward successful solution of imaging problems that require objects to be identified individually. On the other hand, weak or erratic segmentation algorithms almost always guarantee eventual failure. In general, the more accurate the segmentation, the more likely recognition is to succeed.Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating one image region from another) or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. The first decision that must be made is whether the data should be represented as a boundary or as a complete region. 
Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections. Regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. In some applications, these representations complement each other. Choosing a representation is only part of the solution for trans- forming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another.Recognition is the process that assigns a label (e.g., “vehicle”) to an object based on its descriptors. As detailed before, we conclude our coverage of digital image processing with the development of methods for recognition of individual objects.So far we have said nothing about the need for prior knowledge or about the interaction between the knowledge base and the processing modules in Fig 2 above. Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database. This knowledge may be as simple as detailing regions of an image where theinformation of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base also can be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem or an image database containing high-resolution satellite images of a region in connection with change-detection applications. In addition to guiding the operation of each processing module, the knowledge base also controls the interaction between modules. This distinction is made in Fig 2 above by the use of double-headed arrows between the processing modules and the knowledge base, as opposed to single-headed arrows linking the processing modules.Edge detectionEdge detection is a terminology in image processing and computer vision, particularly in the areas of feature detection and feature extraction, to refer to algorithms which aim at identifying points in a digital image at which the image brightness changes sharply or more formally has discontinuities.Although point and line detection certainly are important in any discussion on segmentation,edge detection is by far the most common approach for detecting meaningful discounties in gray level.Although certain literature has considered the detection of ideal step edges, the edges obtained from natural images are usually not at all ideal step edges. Instead they are normally affected by one or several of the following effects:1.focal blur caused by a finite depth-of-field and finite point spread function; 2.penumbral blur caused by shadows created by light sources of non-zero radius; 3.shading at a smooth object edge; 4.local specularities or interreflections in the vicinity of object edges.A typical edge might for instance be the border between a block of red color and a block of yellow. In contrast a line (as can be extracted by a ridge detector) can be a small number of pixels of a different color on an otherwise unchanging background. For a line, there maytherefore usually be one edge on each side of the line.To illustrate why edge detection is not a trivial task, let us consider the problem of detecting edges in the following one-dimensional signal. 
Here, we may intuitively say that there should be an edge between the 4th and 5th pixels.If the intensity difference were smaller between the 4th and the 5th pixels and if the intensity differences between the adjacent neighbouring pixels were higher, it would not be as easy to say that there should be an edge in the corresponding region. Moreover, one could argue that this case is one in which there are several edges.Hence, to firmly state a specific threshold on how large the intensity change between two neighbouring pixels must be for us to say that there should be an edge between these pixels is not always a simple problem. Indeed, this is one of the reasons why edge detection may be a non-trivial problem unless the objects in the scene are particularly simple and the illumination conditions can be well controlled.There are many methods for edge detection, but most of them can be grouped into two categories,search-based and zero-crossing based. The search-based methods detect edges by first computing a measure of edge strength, usually a first-order derivative expression such as the gradient magnitude, and then searching for local directional maxima of the gradient magnitude using a computed estimate of the local orientation of the edge, usually the gradient direction. The zero-crossing based methods search for zero crossings in a second-order derivative expression computed from the image in order to find edges, usually the zero-crossings of the Laplacian of the zero-crossings of a non-linear differential expression, as will be described in the section on differential edge detection following below. As a pre-processing step to edge detection, a smoothing stage, typically Gaussian smoothing, is almost always applied (see also noise reduction).The edge detection methods that have been published mainly differ in the types of smoothing filters that are applied and the way the measures of edge strength are computed. As many edge detection methods rely on the computation of image gradients, they also differ in the types of filters used for computing gradient estimates in the x- and y-directions.Once we have computed a measure of edge strength (typically the gradient magnitude), the next stage is to apply a threshold, to decide whether edges are present or not at an image point. The lower the threshold, the more edges will be detected, and the result will be increasingly susceptible to noise, and also to picking out irrelevant features from the image. Conversely a high threshold may miss subtle edges, or result in fragmented edges.If the edge thresholding is applied to just the gradient magnitude image, the resulting edges will in general be thick and some type of edge thinning post-processing is necessary. For edges detected with non-maximum suppression however, the edge curves are thin by definition and the edge pixels can be linked into edge polygon by an edge linking (edge tracking) procedure. On a discrete grid, the non-maximum suppression stage can be implemented by estimating the gradient direction using first-order derivatives, then rounding off the gradient direction to multiples of 45 degrees, and finally comparing the values of the gradient magnitude in the estimated gradient direction.A commonly used approach to handle the problem of appropriate thresholds for thresholding is by using thresholding with hysteresis. This method uses multiple thresholds to find edges. We begin by using the upper threshold to find the start of an edge. 
Once we have a start point, we then trace the path of the edge through the image pixel by pixel, marking an edge whenever we are above the lower threshold. We stop marking our edge only when the value falls below our lower threshold. This approach assumes that edges are likely to lie on continuous curves, and it allows us to follow a faint section of an edge we have previously seen without marking every noisy pixel in the image as an edge. Still, however, we have the problem of choosing appropriate thresholding parameters, and suitable thresholding values may vary over the image. Some edge-detection operators are instead based upon second-order derivatives of the intensity. This essentially captures the rate of change in the intensity gradient. Thus, in the ideal continuous case, detection of zero-crossings in the second derivative captures local maxima in the gradient. We can conclude that, to be classified as a meaningful edge point, the transition in gray level associated with that point has to be significantly stronger than the background at that point. Since we are dealing with local computations, the method of choice to determine whether a value is "significant" or not is to use a threshold. Thus we define a point in an image as an edge point if its two-dimensional first-order derivative is greater than a specified threshold; a set of such points that are connected according to a predefined criterion of connectedness is by definition an edge. The term edge segment generally is used if the edge is short in relation to the dimensions of the image. A key problem in segmentation is to assemble edge segments into longer edges. An alternative definition, if we elect to use the second derivative, is simply to define the edge points in an image as the zero crossings of its second derivative; the definition of an edge in this case is the same as above. It is important to note that these definitions do not guarantee success in finding edges in an image; they simply give us a formalism to look for them. First-order derivatives in an image are computed using the gradient. Second-order derivatives are obtained using the Laplacian.
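A compact sketch may make the two-threshold hysteresis procedure described above concrete. This is an illustrative implementation that uses connected-component labelling instead of explicit pixel-by-pixel tracing, and the threshold values are arbitrary.

```python
# Sketch of thresholding with hysteresis: strong-edge pixels seed the result, and weaker
# pixels are kept only if they belong to a connected region containing a strong pixel.
import numpy as np
from scipy import ndimage

def hysteresis(gradient_magnitude, low=0.1, high=0.3):
    strong = gradient_magnitude >= high
    weak = gradient_magnitude >= low
    labels, n = ndimage.label(weak)          # connected regions of "weak or stronger" pixels
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True   # regions touched by at least one strong pixel
    keep[0] = False                          # background label
    return keep[labels]                      # boolean edge map
```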
Digital Image Processing: English Original Version and Translation
数字图象处理英文原版及翻译Digital Image Processing: English Original Version and TranslationIntroduction:Digital Image Processing is a field of study that focuses on the analysis and manipulation of digital images using computer algorithms. It involves various techniques and methods to enhance, modify, and extract information from images. In this document, we will provide an overview of the English original version and translation of digital image processing materials.English Original Version:The English original version of digital image processing is a comprehensive textbook written by Richard E. Woods and Rafael C. Gonzalez. It covers the fundamental concepts and principles of image processing, including image formation, image enhancement, image restoration, image segmentation, and image compression. The book also explores advanced topics such as image recognition, image understanding, and computer vision.The English original version consists of 14 chapters, each focusing on different aspects of digital image processing. It starts with an introduction to the field, explaining the basic concepts and terminology. The subsequent chapters delve into topics such as image transforms, image enhancement in the spatial domain, image enhancement in the frequency domain, image restoration, color image processing, and image compression.The book provides a theoretical foundation for digital image processing and is accompanied by numerous examples and illustrations to aid understanding. It also includes MATLAB codes and exercises to reinforce the concepts discussed in each chapter. The English original version is widely regarded as a comprehensive and authoritative reference in the field of digital image processing.Translation:The translation of the digital image processing textbook into another language is an essential task to make the knowledge and concepts accessible to a wider audience. The translation process involves converting the English original version into the target language while maintaining the accuracy and clarity of the content.To ensure a high-quality translation, it is crucial to select a professional translator with expertise in both the source language (English) and the target language. The translator should have a solid understanding of the subject matter and possess excellent language skills to convey the concepts accurately.During the translation process, the translator carefully reads and comprehends the English original version. They then analyze the text and identify any cultural or linguistic nuances that need to be considered while translating. The translator may consult subject matter experts or reference materials to ensure the accuracy of technical terms and concepts.The translation process involves several stages, including translation, editing, and proofreading. After the initial translation, the editor reviews the translated text to ensure its coherence, accuracy, and adherence to the target language's grammar and style. The proofreader then performs a final check to eliminate any errors or inconsistencies.It is important to note that the translation may require adapting certain examples, illustrations, or exercises to suit the target language and culture. This adaptation ensures that the translated version resonates with the local audience and facilitates better understanding of the concepts.Conclusion:Digital Image Processing: English Original Version and Translation provides a comprehensive overview of the field of digital image processing. 
The English original version, authored by Richard E. Woods and Rafael C. Gonzalez, serves as a valuable reference for understanding the fundamental concepts and techniques in image processing. The translation process plays a crucial role in making this knowledge accessible to non-English speakers. It involves careful selection of a professional translator, thorough understanding of the subject matter, and meticulous translation, editing, and proofreading stages. The translated version aims to accurately convey the concepts while adapting to the target language and culture. By providing both the English original version and its translation, individuals from different linguistic backgrounds can benefit from the knowledge and advancements in digital image processing, fostering international collaboration and innovation in this field.
Computer Image and Graphics Foreign Literature Translation: Image Segmentation
Source: Digital Image Processing, 2/E
Image Segmentation
The material in the preceding chapter began a transition in the image processing methods under study: from methods whose inputs and outputs are both images to methods whose inputs are images but whose outputs are attributes extracted from those images (as defined in Section 1.1). Image segmentation is another major step in this direction. Segmentation subdivides an image into its constituent regions or objects. The level to which the subdivision is carried depends on the problem being solved; that is, segmentation should stop when the objects of interest have been isolated. For example, in the automated inspection of electronic assemblies, interest lies in analyzing images of the products to detect specific anomalies, such as missing components or broken connection paths. There is no point in carrying segmentation past the level of detail required to identify those elements.
Segmentation of nontrivial images is one of the most difficult tasks in image processing. Segmentation accuracy determines the eventual success or failure of computerized analysis procedures. For this reason, considerable care should be taken to make segmentation as robust as possible. In some situations, such as industrial inspection applications, at least some measure of control over the environment is possible. An experienced image processing system designer invariably pays considerable attention to such opportunities. In other applications, such as autonomous target acquisition, the system designer has no control over the environment, so the usual approach is to concentrate on selecting sensor types most likely to enhance the objects of interest while diminishing the contribution of irrelevant image detail. A good example is the military's use of infrared imaging to detect objects with strong heat signatures, such as moving equipment and troops.
Image segmentation algorithms generally are based on one of two basic properties of intensity values: discontinuity and similarity. In the first category, the approach is to partition an image based on abrupt changes in intensity, such as edges. The principal approaches in the second category partition an image into regions that are similar according to a set of predefined criteria; thresholding, region growing, and region splitting and merging are examples of such methods.
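As an illustration of the similarity-based category just mentioned, the following is a minimal region-growing sketch, not taken from the original chapter: it starts from a seed pixel and absorbs 4-connected neighbours whose gray level stays within a tolerance of the seed value. The seed point and tolerance are arbitrary illustrative parameters.

```python
# Minimal region-growing sketch: breadth-first growth from a seed pixel with an
# intensity tolerance; seed and tolerance are illustrative choices.
import numpy as np
from collections import deque

def region_grow(gray, seed, tol=10):
    h, w = gray.shape
    region = np.zeros((h, w), dtype=bool)
    seed_value = float(gray[seed])
    queue = deque([seed])
    region[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # 4-connected neighbours
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not region[nr, nc]
                    and abs(float(gray[nr, nc]) - seed_value) <= tol):
                region[nr, nc] = True
                queue.append((nr, nc))
    return region
```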
In this chapter we discuss a number of approaches in each of these two categories. We begin with methods suitable for detecting gray-level discontinuities such as points, lines, and edges. Edge detection in particular has been the dominant theme of segmentation algorithms in recent years. In addition to edge detection itself, we also discuss methods for linking edge segments and assembling edges into boundaries. The discussion of edge detection is followed by an introduction to the various thresholding techniques. Thresholding is also a foundational approach to segmentation that attracts wide interest, especially in applications where speed is an important factor.
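Because thresholding is singled out above for its speed, a brief sketch may help. This is the basic iterative global threshold-selection scheme, a standard textbook procedure rather than code from this translation; it assumes the gray-level histogram is roughly bimodal so that both groups remain non-empty.

```python
# Basic iterative global thresholding: move the threshold to the midpoint between the
# mean gray levels of the two groups it separates, until it stops changing.
import numpy as np

def iterative_threshold(gray, eps=0.5):
    t = float(gray.mean())                       # initial estimate
    while True:
        low_mean = gray[gray <= t].mean()        # assumes both groups stay non-empty
        high_mean = gray[gray > t].mean()
        new_t = 0.5 * (low_mean + high_mean)
        if abs(new_t - t) < eps:
            return new_t
        t = new_t

# usage: binary = gray > iterative_threshold(gray)
```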
The General Workflow of Image Processing
English answer: Image processing involves multiple steps that generally follow a defined process:
1. Image Acquisition: The first step is acquiring the image, which can be done using various imaging devices such as cameras or scanners.
2. Preprocessing: This stage prepares the image for further processing. It may include operations like noise reduction, contrast enhancement, and resizing.
3. Segmentation: Image segmentation aims to divide the image into different regions or objects based on their visual properties. This helps in isolating specific features of interest.
4. Feature Extraction: Features are specific characteristics or patterns extracted from the segmented image. These features can be numerical values representing shape, texture, or other properties.
5. Classification or Recognition: This step categorizes the image into predefined classes or recognizes specific objects within the image. It usually involves comparing the extracted features with a database or model.
6. Post-Processing: After classification or recognition, post-processing steps may be applied. These include operations like image reconstruction or enhancement to improve the final result.
Chinese answer: The general workflow of image processing is as follows:
1. Image acquisition: obtain the image using an imaging device such as a camera or scanner.
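A compact way to see these six stages end to end is sketched below with OpenCV (assuming OpenCV 4.x); the file names, blur size, and the area-based "classification" rule are placeholder choices for illustration rather than part of the original answer.

```python
# Illustrative end-to-end pipeline covering the six stages listed above.
import cv2

image = cv2.imread("sample.jpg")                                        # 1. acquisition
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
denoised = cv2.GaussianBlur(gray, (5, 5), 0)                            # 2. preprocessing
_, mask = cv2.threshold(denoised, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)            # 3. segmentation
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
areas = [cv2.contourArea(c) for c in contours]                          # 4. feature extraction
labels = ["large" if a > 500 else "small" for a in areas]               # 5. crude classification
result = cv2.drawContours(image.copy(), contours, -1, (0, 255, 0), 2)   # 6. post-processing
cv2.imwrite("result.png", result)
```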
Medical Image Processing: Lecture in English
Lecture slide outline (recovered headings):
- Introduction: color image processing (color models; color image processing methods)
- Introduction: image segmentation
- Introduction: image representation and description (usually follows the output of a segmentation stage; converts the data into a form suitable for subsequent computer processing)
- Introduction: what to learn?
- Introduction: image enhancement
- Introduction: image restoration (denoising; geometric correction; radiometric correction)
- Applications in other domains
- Image processing was first used in exploring space
- Exploring space
- Earth resource reconnoitring
- Remote sensing images
- Functional imaging modalities: SPECT (single-photon emission computed tomography), PET (positron emission tomography), fMRI (functional magnetic resonance imaging), EEG (electroencephalography), MEG (magnetoencephalography), and optical intrinsic signal imaging
- Conception of medical image processing
Image processing is not a one-step process. We can distinguish several steps that must be performed one after the other until the data of interest can be extracted from the observed scene. In this way a hierarchical processing scheme is built up, as sketched in the figure, which gives an overview of the different phases of image processing. Image processing begins with the capture of an image by a suitable, not necessarily optical, acquisition system. In a technical or scientific application, we may be able to select an appropriate imaging system. Furthermore, we can set up the illumination system, choose the best wavelength range, and select other options so that the object features of interest are captured in the best possible way in an image. Once the image is sensed, it must be brought into a form that can be treated by digital computers. This process is called digitization.

As traffic problems become more and more serious, Intelligent Transport Systems (ITS) have emerged. Automatic license plate recognition is one of the most significant subjects arising from the combination of computer vision and pattern recognition. The image input to the computer is processed and analyzed in order to locate the position of the license plate, recognize the characters on it, and express those characters as a text string. The license plate recognition system (LPRS) has important applications in ITS. In an LPRS, the first step is locating the license plate in the captured image, which is critical for character recognition; the recognition rate is governed by the accuracy of license plate location. In this paper, several image manipulation methods are compared and analyzed, and solutions for localizing the car plate are presented. Experiments show that good results are obtained with these methods.
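The localization approach summarized here, and elaborated in the passage that follows, relies on projecting edge pixels onto the image rows and columns to find the plate borders. As a rough illustrative sketch only, assuming a binary edge map is already available and using arbitrary fraction-of-peak thresholds, the border search could look like this:

```python
# Sketch of projection-based plate border location: row sums of edge pixels locate the
# upper/lower borders, column sums the left/right borders. Thresholds are illustrative.
import numpy as np

def plate_borders(edge_map, row_frac=0.4, col_frac=0.3):
    rows = edge_map.sum(axis=1).astype(float)           # edge density per row
    cols = edge_map.sum(axis=0).astype(float)           # edge density per column
    row_hits = np.flatnonzero(rows > row_frac * rows.max())
    col_hits = np.flatnonzero(cols > col_frac * cols.max())
    top, bottom = row_hits.min(), row_hits.max()        # upper and lower borders
    left, right = col_hits.min(), col_hits.max()        # left and right borders
    return top, bottom, left, right
```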
The methods based on edge map and frequency analysis is used in the process of the localization of the license plate, that is to say, extracting the characteristics of the license plate in the car images after being checked up for the edge, and then analyzing and processing until the probably area of license plate is extracted.The automated license plate location is a part of the image processing ,it’s also an important part in the intelligent traffic system.It is the key step in the Vehicle License Plate Recognition(LPR).A method for the recognition of images of different backgrounds and different illuminations is proposed in the paper.the upper and lower borders are determined through the gray variation regulation of the character distribution.The left and right borders are determined through the black-white variation of the pixels in every row.The first steps of digital processing may include a number of different operations and are known as image processing.If the sensor has nonlinear characteristics, these need to be corrected.Likewise,brightness and contrast ofthe image may require improvement.Commonly,too,coordinate transformations are needed to restore geometrical distortions introduced during image formation.Radiometric and geometric corrections are elementary pixel processing operations.It may be necessary to correct known disturbances in the image,for instance caused by a defocused optics,motion blur,errors in the sensor,or errors in the transmission of image signals.We also deal with reconstruction techniques which are required with many indirect imaging techniques such as tomography that deliver no direct image.A whole chain of processing steps is necessary to analyze and identify objects.First,adequate filtering procedures must be applied in order to distinguish the objects of interest from other objects and the background.Essentially,from an image(or several images),one or more feature images are extracted.The basic tools for this task are averaging and edge detection and the analysis of simple neighborhoods and complex patterns known as texture in image processing.An important feature of an object is also its motion.Techniques to detect and determine motion are necessary.Then the object has to be separated from the background.This means that regions of constant features and discontinuities must be identified.This process leads to a label image.Now that we know the exact geometrical shape of the object,wecan extract further information such as the mean gray value,the area,perimeter,and other parameters for the form of the object[3].These parameters can be used to classify objects.This is an important step in many applications of image processing,as the following examples show:In a satellite image showing an agricultural area,we would like to distinguish fields with different fruits and obtain parameters to estimate their ripeness or to detect damage by parasites.There are many medical applications where the essential problem is to detect pathologi-al changes.A classic example is the analysis of aberrations in chromosomes.Character recognition in printed and handwritten text is another example which has been studied since image processing began and still poses significant difficulties.You hopefully do more,namely try to understand the meaning of what you are reading.This is also the final step of image processing,where one aims to understand the observed scene.We perform this task more or less unconsciously whenever we use our visual system.We recognize people,we can easily 
distinguish between the image of a scientific lab and that of a living room,and we watch the traffic to cross a street safely.We all do this without knowing how the visual system works.For some times now,image processing and computer-graphics have been treated as two different areas.Knowledge in both areas has increased considerably and more complex problems can now betreated.Computer graphics is striving to achieve photorealistic computer-generated images of three-dimensional scenes,while image processing is trying to reconstruct one from an image actually taken with a camera.In this sense,image processing performs the inverse procedure to that of computer graphics.We start with knowledge of the shape and features of an object—at the bottom of Fig. and work upwards until we get a two-dimensional image.To handle image processing or computer graphics,we basically have to work from the same knowledge.We need to know the interaction between illumination and objects,how a three-dimensional scene is projected onto an image plane,etc.There are still quite a few differences between an image processing and a graphics workstation.But we can envisage that,when the similarities and interrelations between computergraphics and image processing are better understood and the proper hardware is developed,we will see some kind of general-purpose workstation in the future which can handle computer graphics as well as image processing tasks[5].The advent of multimedia,i. e. ,the integration of text,images,sound,and movies,will further accelerate the unification of computer graphics and image processing.In January 1980 Scientific American published a remarkable image called Plume2,the second of eight volcanic eruptions detected on the Jovian moon bythe spacecraft Voyager 1 on 5 March 1979.The picture was a landmark image in interplanetary exploration—the first time an erupting volcano had been seen in space.It was also a triumph for image processing.Satellite imagery and images from interplanetary explorers have until fairly recently been the major users of image processing techniques,where a computer image is numerically manipulated to produce some desired effect-such as making a particular aspect or feature in the image more visible.Image processing has its roots in photo reconnaissance in the Second World War where processing operations were optical and interpretation operations were performed by humans who undertook such tasks as quantifying the effect of bombing raids.With the advent of satellite imagery in the late 1960s,much computer-based work began and the color composite satellite images,sometimes startlingly beautiful, have become part of our visual culture and the perception of our planet.Like computer graphics,it was until recently confined to research laboratories which could afford the expensive image processing computers that could cope with the substantial processing overheads required to process large numbers of high-resolution images.With the advent of cheap powerful computers and image collection devices like digital cameras and scanners,we have seen a migration of image processing techniques into the publicdomain.Classical image processing techniques are routinely employed by graphic designers to manipulate photographic and generated imagery,either to correct defects,change color and so on or creatively to transform the entire look of an image by subjecting it to some operation such as edge enhancement.A recent mainstream application of image processing is the compression of images—either 
for transmission across the Internet or the compression of moving video images in video telephony and video conferencing. Video telephony is one of the current crossover areas that employ both computer graphics and classical image processing techniques to try to achieve very high compression rates. All this is part of an inexorable trend towards the digital representation of images. Indeed, that most powerful image form of the twentieth century, the TV image, is also about to be taken into the digital domain. Image processing is characterized by a large number of algorithms that are specific solutions to specific problems. Some are mathematical or context-independent operations that are applied to each and every pixel. For example, we can use Fourier transforms to perform image filtering operations. Others are "algorithmic": we may use a complicated recursive strategy to find those pixels that constitute the edges in an image. Image processing operations often form part of a computer vision system. The input image may be filtered to highlight or reveal edges prior to a shape detection stage; such operations are usually known as low-level operations. In computer graphics, filtering operations are used extensively to avoid aliasing or sampling artifacts.
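As a minimal illustration of the Fourier filtering mentioned above, and not taken from the original text, the sketch below keeps only the low spatial frequencies inside an ideal circular cutoff and transforms back. The cutoff radius is an arbitrary choice; a practical system would usually prefer a smoother Gaussian or Butterworth transfer function to avoid ringing.

```python
# Sketch of frequency-domain low-pass filtering: FFT, ideal circular pass band, inverse FFT.
import numpy as np

def ideal_lowpass(gray, cutoff=30):
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    h, w = gray.shape
    y, x = np.ogrid[:h, :w]
    mask = (y - h / 2) ** 2 + (x - w / 2) ** 2 <= cutoff ** 2   # circular pass band
    filtered = spectrum * mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))
```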