Color Imaging Using DSP Implementations of Different Filters


Choose a Technology Product and Write an 800-Word Essay

Night-vision technology is an electro-optical technique that enables observation at night by means of electro-optical imaging devices.

Night-vision technology has two branches: low-light-level (image-intensification) night vision and infrared night vision.

Low-light-level night vision, also called image-intensification technology, is an electro-optical imaging technique that uses goggles fitted with an image-intensifier tube to amplify the faint image of a target lit only by the night sky, so that it can be observed.

Low-light-level night-vision devices are currently the most widely produced, fielded, and used night-vision equipment abroad. They fall into two categories: direct-view instruments (observation scopes, weapon sights, night-driving viewers, and night-vision goggles) and indirect-view instruments (low-light-level television).

Infrared night vision divides into active and passive infrared technology.

Active infrared night vision observes a scene by actively illuminating it and using the infrared light that the target reflects back from the source; the corresponding equipment is the active infrared night-vision device.

Passive infrared night vision observes by means of the infrared radiation the target itself emits, detecting targets from the temperature or thermal-radiation difference between the target and its background, or between different parts of the target.

Its equipment is the thermal imager.

Thermal imagers have unique advantages over other night-vision devices: they work in fog, rain, and snow, operate at long range, can recognize camouflage, and resist jamming. They have become the development focus of night-vision equipment abroad and will, to a certain degree, replace low-light-level devices.

Night-vision goggles come in two kinds: low-light-level goggles and infrared goggles.

Low-light-level goggles amplify faint ambient light, whereas infrared goggles convert infrared radiation into visible light.

Infrared goggles are in turn divided into active and passive types. An active unit emits a beam of infrared light that strikes an object and reflects back, much like a flashlight; a passive unit amplifies the infrared radiation the object itself emits and converts it into visible light.

Consequently, in complete darkness, low-light-level goggles cannot see anything.

Without an infrared source (most things that produce heat can serve as one: living creatures, vehicles, flames, and so on), passive infrared goggles likewise see nothing.

Active infrared goggles, by contrast, can see under any conditions.

Different goggles suit different situations; low-light-level goggles are best used outdoors under starlight or moonlight.

Because night-vision goggles display only a single color and their screens are green (notice that many instrument displays are green), everything you see appears green.


Automatic Image Segmentation Based on Human Color Perceptions (IJIGSP-V1-N1-4)

I.J. Image, Graphics and Signal Processing, 2009, 1, 25-32. Published Online October 2009 in MECS.

Automatic Image Segmentation Based on Human Color Perceptions

Yu Li-jie (College of Automation, Beijing Union University, Beijing, China; e-mail: zdhtlijie@), Li De-sheng and Zhou Guan-ling (Mechanical and Electronic Technology Research Centre, Beijing University of Technology, Beijing, China; e-mail: dsli@, zdhtguanling@)

Abstract—In this paper we propose a color image segmentation algorithm based on a perceptual color vision model. First, the original image is divided into non-overlapping image blocks; then the mean and variance of every block are calculated in the CIE L*a*b* color space, and the blocks are classified by variance into homogeneous color blocks and texture blocks. Initial seed regions are selected automatically by evaluating the homogeneous color blocks' color differences in CIE L*a*b* space together with spatial information. The color-contrast gradient of the texture blocks is computed, and the edge information is stored for region growing. A fuzzy region-growing algorithm combined with color-edge detection yields the final segmentation map. The experimental segmentation results are favorably consistent with human perception and confirm the effectiveness of the algorithm.

Index Terms—color image segmentation, visible color difference, region growing, human color perception

I. INTRODUCTION

Image segmentation refers to partitioning an image into regions that are homogeneous or "similar" in some image characteristic. It is usually the first task of any image analysis module, so subsequent tasks rely strongly on the quality of the segmentation [1]. In recent years, automatic image segmentation has become a prominent objective in image analysis and computer vision. Various techniques have been proposed in the literature that use color, edges, or texture as segmentation properties. With these properties, images can be analyzed for applications including video surveillance, image retrieval, medical image analysis, and object classification.

At the outset, segmentation algorithms used grayscale information only (see [2] for a comprehensive survey). Advances in color technology later enabled meaningful segmentation of color images, as described in [3, 4]; color information can significantly improve discrimination and recognition over gray-level methods. However, early procedures clustered pixels using color similarity alone. Spatial locations and correlations of pixels were not taken into account, yielding fragmented regions throughout the image. Statistical methods based on previous observation, such as classical Bayes decision theory, have also been quite popular [5, 6], but they depend on global a priori knowledge of the image content and organization. Until recently, very little work had used an underlying physical model of the color image formation process when developing color difference metrics.

Regarding image segmentation as the problem of partitioning pixels into clusters according to their color similarity and spatial relation, we propose an automatic color image segmentation method that proceeds in two stages:
(1) It selects seed regions using block-based region growing and a perceptual color vision model in the CIE L*a*b* color space; (2) it generates the final segmentation through an effective merging procedure based on a fuzzy algorithm and color-edge detection. The procedure first partitions the original image into non-overlapping range blocks and calculates the mean and variance of each block; the sub-blocks of the color image are grouped into clusters, each detected cluster receives a label, and blocks with the same label form a seed region that grows into a larger one. Seeds with similar color and texture values are then merged by the fuzzy algorithm and color-edge detection to obtain the final segmentation map. The algorithm respects the fact that segmentation is a low-level procedure and, as such, should not require great computational complexity. Our algorithm is implemented in a MATLAB environment and tested on a large database (~100 images) of highly diverse images. The results indicate that the proposed method performs favorably against currently available benchmarks.

The remainder of the paper is organized as follows. Section II reviews the background required to implement our algorithm effectively. The proposed algorithm is described in Section III, its application is discussed in Section IV, and we draw our conclusions in the last section.

II. BACKGROUND

A. Color space conversion

The choice of color space is a very important decision that can dramatically influence the results of the segmentation. Many images are stored in RGB format, so working directly in RGB would be easiest. The main disadvantage of the RGB color space for natural images is the high correlation between its components: about 0.78 for r_BR (the cross-correlation between the B and R channels), 0.98 for r_RG, and 0.94 for r_GB [7]. This makes the choice of RGB thresholds very difficult. Another problem is perceptual non-uniformity: the perceived difference between two colors correlates poorly with their Euclidean distance in RGB space. In this paper we therefore work in the CIE L*a*b* color space, because of three major properties: (1) it separates achromatic from chromatic information; (2) it is a uniform color space; and (3) it is similar to human visual perception [8].

CIE L*a*b* is defined from nonlinearly compressed CIE XYZ coordinates, which are derived from RGB as in equation (1):

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 2.7689 & 1.7517 & 1.1302 \\ 1.0000 & 4.5907 & 0.0601 \\ 0.0000 & 0.0565 & 5.5943 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (1)$$

Based on this definition, L*a*b* is given by

$$L^* = 116\,f\!\left(\tfrac{Y}{Y_n}\right) - 16, \quad a^* = 500\left[f\!\left(\tfrac{X}{X_n}\right) - f\!\left(\tfrac{Y}{Y_n}\right)\right], \quad b^* = 200\left[f\!\left(\tfrac{Y}{Y_n}\right) - f\!\left(\tfrac{Z}{Z_n}\right)\right] \qquad (2)$$

where

$$f(q) = \begin{cases} q^{1/3}, & q > 0.008856 \\ 7.787\,q + \tfrac{16}{116}, & \text{otherwise} \end{cases} \qquad (3)$$

for $q \in \{X/X_n,\, Y/Y_n,\, Z/Z_n\}$. Here $(X_n, Y_n, Z_n)$ is the reference white defined by a CIE standard illuminant; in this case it is obtained by setting $R = G = B = 100$ in equation (1).
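For concreteness, here is a minimal NumPy sketch of equations (1)-(3). Python is used purely for illustration (the paper's own implementation is MATLAB), and the [0, 100] input scaling follows the paper's convention of obtaining the reference white by setting R = G = B = 100.

```python
import numpy as np

# Matrix of equation (1): rows map (R, G, B) to (X, Y, Z).
M = np.array([[2.7689, 1.7517, 1.1302],
              [1.0000, 4.5907, 0.0601],
              [0.0000, 0.0565, 5.5943]])

def f(q):
    # Piecewise cube-root compression of equation (3).
    q = np.asarray(q, dtype=float)
    return np.where(q > 0.008856, np.cbrt(q), 7.787 * q + 16.0 / 116.0)

def rgb_to_lab(rgb):
    """Convert an (..., 3) array of RGB values on [0, 100] to CIE L*a*b*."""
    xyz = rgb @ M.T                                   # equation (1)
    xn, yn, zn = M @ np.array([100.0, 100.0, 100.0])  # reference white
    fx = f(xyz[..., 0] / xn)
    fy = f(xyz[..., 1] / yn)
    fz = f(xyz[..., 2] / zn)
    L = 116.0 * fy - 16.0                             # equation (2)
    a = 500.0 * (fx - fy)
    b = 200.0 * (fy - fz)
    return np.stack([L, a, b], axis=-1)
```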
B. Visible color difference

Recent research indicates that most color image segmentation algorithms are very sensitive to the color difference calculation or color similarity measure [9]. It is safe to say that the accuracy of the color difference calculation determines the performance of a color image segmentation approach. The perceptual difference between any two colors can ideally be represented by the Euclidean distance between their coordinates in the CIE L*a*b* color space, and sufficiently distant colors are considered perceptually distinguishable [10].

In CIE L*a*b*, L* is the metric lightness and a*, b* are the metric chromatic coordinates. For two colors $(L_1^*, a_1^*, b_1^*)$ and $(L_2^*, a_2^*, b_2^*)$, the lightness difference is $\Delta L^* = L_1^* - L_2^*$, the chroma difference is

$$\Delta C^* = \sqrt{a_1^{*2} + b_1^{*2}} - \sqrt{a_2^{*2} + b_2^{*2}},$$

and the hue difference is

$$\Delta H^* = \tan^{-1}\!\left(\frac{b_1^*}{a_1^*}\right) - \tan^{-1}\!\left(\frac{b_2^*}{a_2^*}\right).$$

The overall color difference can then be expressed as the geometric distance

$$\nabla E^* = \sqrt{\left(\frac{\Delta L^*}{K_L S_L}\right)^2 + \left(\frac{\Delta C^*}{K_C S_C}\right)^2 + \left(\frac{\Delta H^*}{K_H S_H}\right)^2} \qquad (4)$$

The parametric factors $K_L$, $K_C$, and $K_H$ correct for variations contributed by experimental or background conditions. To compute the CIE94 color difference $\nabla E_{94}^*$, the following parameter values are used in (4):

$$K_L = K_C = K_H = 1, \qquad S_L = 1, \qquad S_C = 1 + 0.045\,C^*, \qquad S_H = 1 + 0.015\,C^* \qquad (5)$$

If $\Delta L^* > 0$, the sample color is paler and brighter than the standard color; otherwise it is darker. If $\Delta a^* > 0$, the sample is redder than the standard; otherwise it is greener. If $\Delta b^* > 0$, the sample is yellower than the standard; otherwise it is bluer.

The unit of chromatic difference is the NBS unit (after the National Bureau of Standards): two colors differ by one NBS unit when $\nabla E^* = 1$. A 24-bit color image contains up to 16 million colors, most of which cannot be differentiated by human beings, because human eyes are relatively insensitive to small color differences. Y. H. Gong's research [11] shows a close relation between human color perception and the NBS color distance, which was devised through a number of subjective color-evaluation experiments to better approximate human color perception; the correspondence is shown in Table 1. The values of E*ab can be roughly classified into levels that reflect the degree of color difference perceived by humans: the difference is hardly perceptible when E*ab is smaller than 3.0, perceptible but still tolerable between 3.0 and 6.0, and usually not acceptable when larger than 6.0. Hence, in this paper, we define a color difference to be "visible" if its E*ab value is larger than 6.0.

TABLE 1. CORRESPONDENCE BETWEEN HUMAN COLOR PERCEPTION AND NBS UNITS

NBS unit      Human perception
< 3.0         Slightly different
3.0 - 6.0     Remarkably different but acceptable
6.0 - 12.0    Very different
> 12.0        Different color
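A small sketch of the CIE94 difference of equations (4)-(5), together with the paper's 6.0-unit visibility test. One substitution is made: the hue term uses the standard CIE94 residual (dH^2 recovered from da, db, and dC) rather than the paper's arctangent form.

```python
import numpy as np

def delta_e94(lab1, lab2):
    """CIE94 color difference between two (L*, a*, b*) triples."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    C1, C2 = np.hypot(a1, b1), np.hypot(a2, b2)
    dL, dC = L1 - L2, C1 - C2
    # Residual hue term of the standard CIE94 formulation.
    dH2 = max((a1 - a2) ** 2 + (b1 - b2) ** 2 - dC ** 2, 0.0)
    SL, SC, SH = 1.0, 1.0 + 0.045 * C1, 1.0 + 0.015 * C1   # equation (5)
    return np.sqrt((dL / SL) ** 2 + (dC / SC) ** 2 + dH2 / SH ** 2)

def is_visible(lab1, lab2):
    # The paper's "visible" criterion: more than 6.0 NBS units apart.
    return delta_e94(lab1, lab2) > 6.0
```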
C. Color gradient: boundary edges

The color gradient describes a gradual blend of color, an even gradation from low to high values. Mathematically, the gradient of a two-variable function (here the image intensity function) is, at each image point, a 2-D vector whose components are the derivatives in the horizontal and vertical directions. At each point, the gradient vector points in the direction of the largest possible intensity increase, and its length corresponds to the rate of change in that direction. Wesolkowski compared several edge detectors in multiple color spaces and concluded that the Sobel operator performs best [12], so the Sobel operator is used in this paper. Figure 1 shows the boundary Sobel operator masks:

(a) horizontal mask     (b) vertical mask
   -1  0  1                -1 -2 -1
   -2  0  2                 0  0  0
   -1  0  1                 1  2  1

To construct a color-contrast gradient, the eight neighbors of each pixel are searched, and the difference in lightness and in the color-opponent dimensions is calculated against each neighbor. The color difference thus takes the place of the intensity contrast: $\nabla E$ denotes the color difference between two pixels, determined by equation (4). Let $v_x$ and $v_y$ denote the gradients along the x and y directions, and label the 3x3 neighborhood of pixel $(x, y)$ as in Figure 2:

a1   a2   a3
a4  (x,y) a5
a6   a7   a8

Applying the Sobel weights to the color differences between opposing neighbors gives

$$v_x = \nabla E(a_6, a_1) + 2\,\nabla E(a_7, a_2) + \nabla E(a_8, a_3) \qquad (6)$$

$$v_y = \nabla E(a_3, a_1) + 2\,\nabla E(a_5, a_4) + \nabla E(a_8, a_6) \qquad (7)$$

and the magnitude of the Sobel response is

$$G = \sqrt{v_x^2 + v_y^2} \qquad (8)$$

Based on this definition of the color-contrast gradient, a color edge detector can be built. A conventional region-edge integrating algorithm stores edge information on the pixel itself; when different regions lie on the two sides of an edge, the algorithm must decide which region the edge pixel belongs to, which makes it cumbersome. In this paper, we instead store the edge information on the boundary between pixels [13, 14]: we define a boundary pixel as a pixel of infinitesimal width that virtually exists between two real pixels. The advantages of keeping edge information on boundary pixels, which we call boundary edges, are that (1) region growth becomes easier to control and (2) the whole algorithm becomes simpler. We set a threshold T to filter out low gradients and store only the boundary pixels with high gradient.
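The color-contrast gradient of equations (6)-(8) can then be sketched as below. The color-difference function is passed in (for example, delta_e94 from the previous sketch), and only interior pixels are processed to keep the example short.

```python
import numpy as np

def color_gradient(lab, dE):
    """Gradient magnitude G of equation (8) for an (H, W, 3) L*a*b* image."""
    H, W, _ = lab.shape
    G = np.zeros((H - 2, W - 2))
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            # 3x3 neighbourhood labelled a1..a8 row-wise around (i, j).
            a1, a2, a3 = lab[i-1, j-1], lab[i-1, j], lab[i-1, j+1]
            a4, a5 = lab[i, j-1], lab[i, j+1]
            a6, a7, a8 = lab[i+1, j-1], lab[i+1, j], lab[i+1, j+1]
            vx = dE(a6, a1) + 2 * dE(a7, a2) + dE(a8, a3)   # equation (6)
            vy = dE(a3, a1) + 2 * dE(a5, a4) + dE(a8, a6)   # equation (7)
            G[i-1, j-1] = np.hypot(vx, vy)                  # equation (8)
    return G
```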
III. PROPOSED ALGORITHM

Figure 3 shows the architecture of the proposed algorithm. First, the input color image is converted from RGB to the CIE L*a*b* color space and a block size n x n is chosen; the block should be just small enough that the eye averages color over it. The image is divided into small, non-overlapping rectangular blocks, and the mean and variance of each block are calculated. To investigate the relationships among regions, the data are modeled as an (N, P) matrix, where N is the total number of blocks in the image and P is the number of variables describing each block. The blocks are classified into two types, and region growing together with the visible color difference yields the initial seed regions. Third, the color Sobel detector is applied to the texture blocks: from the L*a*b* data, the gradient magnitude G(i, j) of the color image field is calculated. Fourth, the initial seed regions are hierarchically merged based on color, spatial, and adjacency information, and the segmented color image is output. The rest of this section discusses the details.

A. Classification with variance and mean

In this paper, the block mean and variance are the classification features; block variance is commonly used to score the simplicity or complexity of each block. The average color value $\bar{x}$ of a block is defined as

$$\bar{x} = \frac{1}{n \times n} \sum_{i=1}^{n} \sum_{j=1}^{n} x_{ij} \qquad (9)$$

and the block variance (one value $(v_L, v_a, v_b)$ per channel) as

$$\delta(x) = \frac{1}{n \times n} \sum_{i=1}^{n} \sum_{j=1}^{n} \left( x_{ij} - \bar{x} \right)^2 \qquad (10)$$

where n is the block size and $x_{ij}$ (with $x \in \{L^*, a^*, b^*\}$) is a pixel value of the block.

The blocks are classified into two groups according to variance: monotone color blocks, and texture or edge blocks containing a variety of colors. The variance of a monotone color block is very small, while that of a texture block is large because of the edges it contains, so a threshold T on the variance distinguishes the two types. Based on our experiments, the value 0.05 is selected as the threshold: with a lower value, fewer blocks are classified as homogeneous color blocks and some objects may be missed; with a higher value, more blocks are classified as homogeneous and distinct regions may become connected. Figure 4 sketches the block classification.
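A compact NumPy sketch of the block statistics of equations (9)-(10) and the variance test. The paper does not state the channel scaling under which the threshold 0.05 was tuned, so the assumption here is that channel values have been normalised to [0, 1].

```python
import numpy as np

def classify_blocks(lab, n=4, T=0.05):
    """Label each n-by-n block as homogeneous (True) or texture (False).

    Returns the per-block means and the boolean homogeneity mask; channel
    values are assumed normalised to [0, 1] (assumption, see above).
    """
    H, W, C = lab.shape
    bh, bw = H // n, W // n
    blocks = lab[:bh * n, :bw * n].reshape(bh, n, bw, n, C)
    mean = blocks.mean(axis=(1, 3))                                   # eq. (9)
    var = ((blocks - mean[:, None, :, None]) ** 2).mean(axis=(1, 3))  # eq. (10)
    homogeneous = var.max(axis=-1) < T   # every channel must be quiet
    return mean, homogeneous
```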
B. Initial seed region generation

Block partitioning leaves an over-segmentation problem: the small over-segmented regions should be merged into larger ones to obtain the initial seed regions, so as to satisfy the needs of image analysis and follow-up processing. The initial seed regions are generated by detecting all monotone color blocks in the image. To prevent multiple seeds from being generated within monotone, connected image blocks, the first requirement is that a seed region be larger than 0.25% of the image, since a tiny region is often a blotted region. The second requirement is that the color difference $\nabla E_{ij}$ between any two seed regions exceed 6.0. Briefly:

1) Select the monotone color blocks within the image.
2) For each adjacent monotone color block, calculate the color difference between the candidate and its nearest-neighbor seed region; if $\nabla E_{ij} < 6.0$, merge it into the existing seed region, otherwise add a new seed region.
3) For each non-adjacent monotone color block, calculate the color difference between the candidate and its nearest-neighbor seed region; if $\nabla E_{ij} < 6.0$, merge it into the existing seed region, otherwise add a new seed region.
4) Scan the runs, assign preliminary labels, and record label equivalences in a local equivalence table.

After this processing there are m different color seed sets $\{S_1, S_2, S_3, \ldots, S_m\}$ corresponding to n homogeneous color regions; one color set may mark several color blocks that are not spatially adjacent. The homogeneous color blocks are labeled with their own sequence numbers to declare the initial seed regions, recorded as $\{B_1, B_2, \ldots, B_n\}$.

C. Fuzzy region-growing algorithm

The number and the interiors of the regions are defined after marker extraction, but many undecided pixels, mostly located around the contours of the regions, are not yet assigned to any region. Assigning these pixels to a region can be considered a decision process that precisely defines the partition. The segmentation procedure used here is a fuzzy region-growing algorithm based on fuzzy rules. Our final objective is to split the original image I into a number of homogeneous but disjoint regions $R_j$ separated by one-pixel-wide contours:

$$I = \bigcup_{j=1}^{n} R_j, \qquad R_j \cap R_k = \varnothing, \; j \neq k \qquad (11)$$

Region growing is essentially a grouping procedure that gathers pixels or sub-regions into larger regions within which the homogeneity criterion holds. Starting from a single seed region, a segmented region is created by merging the neighboring pixels or adjacent regions around the current pixel; the operation is repeated until no pixel remains unassigned.

Since our strategy for segmenting natural color images combines an accurate segmentation driven by the color difference and color gradient with a rough segmentation driven by the local fractal-dimension feature, a technique for information integration is needed. We adopt fuzzy rules, each with a corresponding membership function, to integrate the different features.

[Rule 1] The first image feature is the color difference between the average value $g_{ave}(R_k)$ of a region and the value $g(i, j)$ of the pixel under investigation:

$$\nabla E = \left| g_{ave}(R_k) - g(i, j) \right| \qquad (12)$$

The fuzzy rule for the fuzzy set SMALL is:
R1: If the color difference is small, then probably merge; else probably do not merge.

[Rule 2] The second feature is the color-contrast gradient, i.e., the value of the boundary edges between the pixel and its adjacent region. A new pixel may be merged into a region if the gradient is low; if the gradient is high, the pixel is not merged. We employ the boundary Sobel operator to calculate the color gradient and achieve accurate segmentation at strong-edge regions; the boundary edges effectively prevent unnecessary growth of regions across edges. The fuzzy rule for the fuzzy set LOW is:
R2: If the gradient is low, then probably merge; else probably do not merge.

Figure 5 shows the two membership functions corresponding to the fuzzy rules. After fuzzification by the two rules, min-max inference takes place over the fuzzy sets, followed by conventional centroid defuzzification; a pixel is actually merged when the homogeneity criterion is satisfied to an extent of at least 50% after defuzzification. (A toy sketch of this rule evaluation is given at the end of this subsection.)

The fuzzy region-growing process is briefly as follows:

1) For one of the initial seed regions $\{B_1, B_2, \ldots, B_m\}$, starting from the upper-left corner of the adjacent texture blocks, seek the pixels with the same color features and merge them into the seed region. Each time an adjacent sub-block is accepted, the seed region grows into a new seed region and its features are updated. Repeat until no neighboring sub-block is acceptable, then stop the growing process.
2) Repeat step 1) until all seed regions have finished growing.
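The paper specifies its membership functions only graphically, so the following toy sketch of rules R1 and R2 uses assumed triangular memberships and assumed breakpoints d_max and g_max; it is meant to show the shape of the decision, not the paper's exact defuzzification.

```python
def merge_decision(color_diff, gradient, d_max=6.0, g_max=20.0):
    """Toy fuzzy merge test for rules R1 (SMALL) and R2 (LOW)."""
    small = max(0.0, 1.0 - color_diff / d_max)   # membership of SMALL (R1)
    low = max(0.0, 1.0 - gradient / g_max)       # membership of LOW (R2)
    merge = min(small, low)                      # both rules vote "merge"
    not_merge = max(1.0 - small, 1.0 - low)      # either rule votes "do not"
    # Crude stand-in for centroid defuzzification: accept when the merge
    # side carries at least 50% of the total rule strength.
    return merge / (merge + not_merge + 1e-12) >= 0.5
```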
D. Merging small regions

Although the over-segmentation problem is diminished by the above procedures, in most cases a further step that eliminates small regions is still required to produce clearer objects or homogeneous regions. In our algorithm, such small regions occur mostly along the contours of objects or inside complex regions due to compound texture. The merging of regions therefore aims at the effective removal of noise, and small image details are merged into the connected area. The merging proceeds as follows:

1) A tiny region is regarded as a blotted region and is filtered out.
2) Every other fragmented region is merged based on its spatial relation to the adjacent regions.

The spatial relation between objects is used as follows. If an object is entirely surrounded by another object, or surrounded on at least three sides (Figures 6 and 7), it is treated as an interior object and merged into the surrounding region. If it is not surrounded by another object (Figure 8), it is regarded as a border object and merged according to the edge conditions.

[Figures 6-8: label maps showing an object entirely surrounded by another object, an object surrounded by another object from three directions, and a small object located between two objects.]

Finally, if the number of pixels in a region is lower than a smallest-region threshold determined by the image size, the region is merged into the neighboring region with the smallest color difference. This rule is the last step of the segmentation algorithm. Based on our experiments, we select 1/150 of the total number of pixels of the given color image as the threshold. The procedure repeats until no region is smaller than the threshold, and the segmented image is produced.

IV. EXPERIMENT SIMULATION AND RESULT ANALYSIS

To verify the performance of the proposed segmentation algorithm, we experimented with color remote-sensing images and natural images; some typical results are shown in Figures 9 and 10. We can also compare our algorithm with the traditional SRG algorithm: our method uses regions rather than pixels as initial seeds, so high-level knowledge of the image partitions can be exploited much better through the choice of seeds, because a region carries more information than a pixel.

[Figure 9: fabric image segmentation results. (a) Original image, (b) color-contrast gradient, (c) green object, (d) purple object.]

[Figure 10: capsicum image segmentation results. (a) Original image, (b) color-contrast gradient, (c) red capsicum, (d) green capsicum.]

[Figure 11: capsicum image segmentation results. (a) Original image, (b) color-contrast gradient, (c) red capsicum, (d) yellow capsicum, (e) green capsicum.]

V. CONCLUSION

This work presents a computationally efficient method for the automatic segmentation of color images of varied complexity. First, the original image is divided into non-overlapping rectangular blocks; the mean and variance of each block are calculated in the CIE L*a*b* color space, and the blocks are divided by variance into homogeneous color blocks and texture blocks. Initial seed regions are selected automatically from the homogeneous color blocks' color differences in CIE L*a*b* space together with spatial and adjacency information. The color-contrast gradient of the texture blocks is computed and the boundary-pixel information is stored.
Finally, the region-growing and merging algorithm is applied to achieve the segmentation result.

The experimental color image segmentation results are favorably consistent with human perception and suit content-based image retrieval and recognition. There are two main disadvantages of our algorithm. First, although the fixed threshold values produce reasonably good results, they may not produce the best result for every image. Second, when an image is highly color-textured (i.e., there are numerous tiny objects with mixed colors), the algorithm may fail to obtain satisfactory results, because the mean and variance no longer represent the property of a region well. How to combine other properties, such as texture, into the algorithm to improve segmentation performance is the subject of our further research.

ACKNOWLEDGMENT

This work is fully supported by the National Science and Technology Infrastructure Program of China (No. 13001790200701).

REFERENCES

[1] Zhang Yu-jin, Image Engineering (II): Image Analysis. Beijing: Tsinghua University Press, 2005.
[2] H. Cheng, X. Jiang, Y. Sun, and J. Wang, "Color image segmentation: advances and prospects," Pattern Recognition, vol. 34, no. 12, pp. 2259-2281, Dec. 2001.
[3] J. Wu, H. Yan, and A. Chalmers, "Color image segmentation using fuzzy clustering and supervised learning," Journal of Electronic Imaging, vol. 3, no. 4, pp. 397-403, Oct. 1994.
[4] P. Schmid, "Segmentation of digitized dermatoscopic images by two-dimensional color clustering," IEEE Trans. on Medical Imaging, vol. 18, no. 2, pp. 164-171, Feb. 1999.
[5] M. J. Daily, J. G. Harris, K. E. Olin, K. Reiser, D. Y. Tseng, and F. M. Vilnrotter, Knowledge-Based Vision Techniques: Annual Technical Report. U.S. Army ETL, Fort Belvoir, VA, October 1987.
[6] G. Healey and T. Binford, "The role and use of color in a general vision system," in Proc. of the DARPA IU Workshop, Los Angeles, CA, pp. 599-613, February 1987.
[7] Gong Sheng-rong, Digital Image Processing and Analysis. Beijing: Tsinghua University Press, 2005.
[8] G. Wyszecki and W. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd ed. New York: Wiley, 1982.
[9] H. D. Cheng, X. H. Jiang, Y. Sun, et al., "Color image segmentation: advances and prospects," Pattern Recognition, 2001, pp. 2259-2281.
[10] Qi Yonghong and Zhou Shenqi, "Review on uniform color space and color difference formula," Print World, 2003(9): 16-19.
[11] Y. H. Gong and G. Proietti, "Image indexing and retrieval based on human perceptual color clustering," in Proc. International Conference on Computer Vision, Mumbai, 1998.
[12] S. Wesolkowski, M. E. Jernigan, and R. D. Dony, "Comparison of color image edge detectors in multiple color spaces," in Proc. ICIP 2000, pp. 796-799.
[13] J. Maeda, V. V. Anh, T. Ishizaka, and Y. Suzuki, "Integration of local fractal dimension and boundary edge in segmenting natural images," in Proc. IEEE Int. Conf. on Image Processing, vol. I, pp. 845-848, 1996.
[14] J. Maeda, T. Iizawa, T. Ishizaka, C. Ishikawa, and Y. Suzuki, "Segmentation diffusion and linking of boundary edges," Pattern Recognition, vol. 31, no. 12, pp. 1993-1999, 1998.
[15] Ye Qixiang, Gao Wen, Wang Weiqiang, and Huang Tiejun, "A color image segmentation algorithm by using color and spatial information," Journal of Software, 2004, 15(4): 522-530.
[16] Hsin-Chia Chen and Sheng-Jyh Wang, "The use of visible color difference in the quantitative evaluation of color image segmentation," IEE Proceedings - Vision, Image and Signal Processing, 2006, vol. 153, pp. 598-609.

YU Li-jie, female, is a Lecturer at the College of Automation, Beijing Union University.
She is currently working toward the Ph.D. degree at the Mechanical & Electronic Technology Research Center of Beijing University of Technology, majoring in Test & Control Technology. Her research interests include computer vision, digital image processing, and software development. Her teaching interests include digital image processing, object-oriented programming, and web database technology.

Implementation of an Iris Recognition System Based on DSP

Computer & Digital Engineering, Vol. 38, No. 12, 2010 (Serial No. 254)

Ke Hui, Wei Yu (College of Information Science and Engineering, Wuhan University of Science and Technology, Wuhan 430081, China)

Abstract: A hardware design for an iris recognition system based on the TMS320C6713B floating-point DSP is presented, and the structure, function, and working principle of each module are described in detail in a modular fashion.

Iris features are extracted from the normalized iris image by an algorithm based on 2D-Gabor filters, and feature matching is completed by comparing Hamming distances.

Experimental results show that the system runs efficiently and stably, with a high recognition rate and good overall performance.

Keywords: iris recognition; DSP; 2D-Gabor filter; feature extraction
CLC number: TP391.41

1 Introduction

With the rapid development of society and the economy, identity recognition is receiving more and more attention, and biometric technology in particular has won the favor of researchers and engineers.
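As a rough illustration of the pipeline described in the abstract (2D-Gabor feature extraction followed by Hamming-distance matching), here is a Python sketch. The kernel parameters, the two-bit phase quantisation, and the helper names are illustrative assumptions; the actual system runs as C code on the TMS320C6713B.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=15, freq=0.12, theta=0.0, sigma=4.0):
    """A complex 2-D Gabor kernel (parameters are illustrative only)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.exp(2j * np.pi * freq * xr)

def iris_code(norm_iris, kernel):
    """Quantise Gabor phase into two bits per pixel (a common coding scheme)."""
    resp = convolve2d(norm_iris, kernel, mode="same")
    return np.stack([resp.real > 0, resp.imag > 0]).ravel()

def hamming_distance(code1, code2):
    # Fraction of disagreeing bits; small distances indicate a match.
    return np.count_nonzero(code1 != code2) / code1.size
```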


Image Processing

Introduction

Image processing refers to the techniques and methods used to manipulate, analyze, and enhance digital images. It plays a crucial role in various fields, including computer vision, medical imaging, surveillance, and entertainment. This document discusses the basics of image processing, its applications, and some widely used algorithms and techniques.

Basics of Image Processing

Digital images are composed of a grid of pixels, with each pixel containing information about its color and intensity. Image processing involves applying mathematical operations to these pixels to achieve specific objectives. The fundamental operations include image enhancement, restoration, and compression.

1. Image Enhancement

Image enhancement techniques aim to improve the visual quality of images by eliminating noise, adjusting contrast, and improving sharpness. Common techniques include histogram equalization, contrast stretching, and spatial filtering. Histogram equalization redistributes the pixel intensities to enhance the overall contrast. Contrast stretching expands the dynamic range of the image by stretching the intensities of the lower and upper ends. Spatial filtering uses a mask or kernel to perform operations such as blurring, sharpening, and edge detection.

2. Image Restoration

Image restoration techniques focus on removing or reducing the effects of noise, blurring, and other impairments in images. Inverse filtering, Wiener filtering, and blind deconvolution are popular methods used for image restoration. Inverse filtering attempts to restore the original image by applying the inverse of the degradation function. Wiener filtering estimates the original image based on statistical properties of the degraded image and the noise. Blind deconvolution aims to estimate both the original image and the blurring function without any prior knowledge.

3. Image Compression

Image compression techniques aim to reduce the storage requirements of digital images without significant loss of quality. Lossless and lossy compression are two commonly used methods. Lossless compression algorithms such as Run-Length Encoding (RLE) and Huffman Coding achieve compression without losing any data. Lossy compression algorithms like JPEG (Joint Photographic Experts Group) selectively discard image data to achieve higher compression ratios; the discarded information may result in some loss of image quality.

Applications of Image Processing

Image processing has a wide range of applications in various fields. Here are some notable examples:

1. Medical Imaging

In medical imaging, image processing techniques are used for tasks such as image reconstruction, segmentation, and feature extraction. They facilitate the analysis and interpretation of medical images, aiding in diagnosis and treatment planning. Modalities like computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound rely heavily on image processing algorithms to create detailed visual representations.

2. Computer Vision

Computer vision involves the development of algorithms and techniques that enable computers to gain visual understanding from digital images or videos. Image processing is a fundamental component of computer vision systems, helping in tasks such as object detection, tracking, and recognition. Applications of computer vision include robotics, autonomous vehicles, surveillance systems, and augmented reality.

3. Remote Sensing

Remote sensing refers to the collection of information about an object or an area without making direct contact with it. Image processing techniques are extensively used in analyzing satellite and aerial imagery for applications such as environmental monitoring, urban planning, agriculture, and disaster management. They enable the extraction of valuable information from these images to support informed decisions.

Widely Used Algorithms in Image Processing

Several algorithms and techniques are widely used in image processing. Here are a few notable ones:

1. Convolutional Neural Networks (CNN)

CNNs have revolutionized the field of computer vision by achieving state-of-the-art performance in various tasks. They are deep learning models capable of automatically learning hierarchical representations from images. CNNs have been successfully applied to image classification, object detection, and semantic segmentation, among other tasks.

2. Edge Detection

Edge detection algorithms aim to identify the boundaries between different regions or objects in an image. The popular Canny edge detection algorithm uses multi-stage processing to accurately detect edges while reducing false positives. It involves steps like smoothing, gradient calculation, non-maximum suppression, and hysteresis thresholding.

3. Image Segmentation

Image segmentation divides an image into meaningful regions or objects. The watershed algorithm is commonly used for this purpose: it treats the intensity of the image as a topographic surface and simulates flooding to separate the regions. Other techniques like thresholding, region growing, and clustering are also used for image segmentation.

Conclusion

Image processing is a crucial field that enables the manipulation, analysis, and enhancement of digital images. It has applications in various domains ranging from medicine to computer vision and remote sensing. By utilizing algorithms and techniques like image enhancement, restoration, and compression, image processing opens up new possibilities for visual understanding and decision-making. As technology continues to evolve, image processing is expected to play an even more significant role in shaping future advancements.
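As a concrete illustration of the histogram equalization technique described under Image Enhancement above, here is a minimal NumPy sketch for 8-bit grayscale images (Python is used for illustration):

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization of a 2-D uint8 image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                 # first occupied grey level
    # Map each grey level through the normalised cumulative distribution.
    scale = 255.0 / max(img.size - cdf_min, 1)
    lut = np.clip(np.round((cdf - cdf_min) * scale), 0, 255).astype(np.uint8)
    return lut[img]
```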

Professional English for Medical Imaging (Complete Version)

(1) To prospectively evaluate the effect of heart rate, heart rate variability, and calcification on dual-source computed tomography (CT) image quality, and to prospectively assess the diagnostic accuracy of dual-source CT for coronary artery stenosis, using invasive coronary angiography as the reference standard.

(2) Chest radiography plays an essential role in the diagnosis of thoracic disease and is the most frequently performed radiologic examination in the United States. Since the discovery of X rays more than a century ago, advances in technology have yielded numerous improvements in thoracic imaging. Evolutionary progress in film-based imaging has led to the development of excellent screen-film systems specifically designed for chest radiography.

Gauss-Legendre Quadrature Formula in MATLAB for Optical Coherence Tomography (OCT)

Optical Coherence Tomography (OCT) is a non-invasive imaging technique that has revolutionized the field of medical diagnostics. By using low-coherence interferometry, OCT can produce high-resolution, cross-sectional images of biological tissues. One of the key components in OCT image reconstruction is the Gauss-Legendre quadrature formula, which is used to calculate the integrals involved in the imaging process.

The Gauss-Legendre quadrature formula is a numerical method for approximating integrals by evaluating a weighted sum of function values at specific points. In the context of OCT, this formula is essential for accurately reconstructing the optical properties of the tissue being imaged. By using the Gauss-Legendre quadrature formula, researchers and clinicians can obtain precise and reliable OCT images that can aid in the diagnosis and treatment of various medical conditions.

In this article, we will discuss how to implement the Gauss-Legendre quadrature formula in MATLAB for OCT image reconstruction. MATLAB is a powerful tool for scientific computing and is widely used in the field of medical imaging. By utilizing MATLAB's built-in functions for numerical integration, we can easily calculate the integrals required for OCT image reconstruction.

To begin, we need to define the function that represents the optical properties of the tissue being imaged. This function will depend on the specific characteristics of the tissue, such as its scattering and absorption coefficients. Once we have defined the function, we can use the Gauss-Legendre quadrature formula to calculate the integrals of this function over the depth of the tissue.

Next, we need to determine the number of quadrature points to use in the Gauss-Legendre formula. The accuracy of the integral approximation will depend on the number of points used, with a higher number of points leading to a more accurate result. In practice, a sufficient number of points can be determined through trial and error, ensuring that the OCT image reconstruction is both accurate and efficient.

After determining the number of quadrature points, we can proceed to calculate the weights and nodes for the Gauss-Legendre quadrature formula. MATLAB provides functions for generating these weights and nodes, making it easy to implement the formula in our OCT image reconstruction algorithm. By inputting the weights, nodes, and function values into the formula, we can accurately calculate the integral over the depth of the tissue.

In conclusion, the Gauss-Legendre quadrature formula is an essential tool for OCT image reconstruction, allowing researchers and clinicians to obtain precise and reliable images of biological tissues. By implementing this formula in MATLAB, we can streamline the image reconstruction process and improve the accuracy of our results. With further advancements in OCT technology and image processing algorithms, the potential applications of this imaging technique are limitless.
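The article describes a MATLAB workflow; the same idea can be sketched in Python with NumPy's built-in Gauss-Legendre routine. The depth-attenuation integrand below is a made-up stand-in for a tissue property profile, not a model taken from the article.

```python
import numpy as np

def gauss_legendre_integrate(func, a, b, n=32):
    """Integrate func over [a, b] with n-point Gauss-Legendre quadrature."""
    nodes, weights = np.polynomial.legendre.leggauss(n)  # nodes on [-1, 1]
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)            # map to [a, b]
    return 0.5 * (b - a) * np.sum(weights * func(x))

# Example: integrate a hypothetical depth-dependent attenuation coefficient
# mu(z) over 0..2 mm of tissue depth.
mu = lambda z: 1.2 + 0.3 * np.sin(z)
print(gauss_legendre_integrate(mu, 0.0, 2.0))
```

Increasing n tightens the approximation; in practice one raises n until the reconstructed values stop changing, which matches the trial-and-error advice above.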

Color Image Segmentation Using Competitive and Cooperative Learning Approach

Yinggan Tang, Xinping Guan

Abstract—Color image segmentation can be considered as a clustering procedure in feature space. k-means and its adaptive version, the competitive learning approach, are powerful tools for data clustering, but both suffer from several drawbacks, such as the dead-unit problem and the need to pre-specify the number of clusters. In this paper, we explore the use of the competitive and cooperative learning (CCL) approach to perform color image segmentation. In CCL, seed points not only compete with each other, but the winner also dynamically selects several of its nearest competitors to form a cooperative team that adapts to the input together; as a result, CCL can automatically select the correct number of clusters and avoid the dead-unit problem. Experimental results show that CCL obtains better segmentation results.

Keywords—Color image segmentation, competitive learning, cluster, k-means algorithm, competitive and cooperative learning.

Manuscript received December 15, 2004. This work was supported by the National Natural Science Foundation of China (No. 60174010). Yinggan Tang is with the Institute of Electrical Engineering, Yanshan University, Qinhuangdao, China (e-mail: ygtang@). Xinping Guan is with the Institute of Electrical Engineering, Yanshan University, Qinhuangdao, China (corresponding author; tel: 86-335-807-4881, fax: 0086-335-807-7929, e-mail: xpguan@).

I. INTRODUCTION

Image segmentation is a process of grouping an image into homogeneous regions with respect to one or more characteristics. It is the first step in image analysis and pattern recognition. In the past decades, much attention was paid to monochrome image segmentation, and many algorithms have been proposed in the literature [1]. Compared to a monochrome image, a color image provides additional information beyond intensity. Human beings intuitively feel that color is an important part of their visual experience, and color is useful or even necessary for powerful processing in computer vision. Acquisition and processing hardware for color images has also become more and more available, easing the computational burden of the high-dimensional color space. Hence, color image processing is becoming increasingly prevalent.

In this paper, we focus on color image segmentation. Many methods for color image segmentation are available in the literature [2]. Among them, one frequently used approach is to apply a clustering procedure in a feature space [3]-[5]. The k-means algorithm and its adaptive version, competitive learning, are popular methods for this purpose. For example, Uchiyama and Arbib [3] proposed a color image segmentation method using competitive learning based on the least-sum-of-squares criterion. Although k-means and competitive learning can successfully accomplish data clustering in some situations, they suffer from several drawbacks pointed out in [6]-[7]. First, there is the dead-unit problem: if some units are initialized far away from the input data set in comparison with the other units, they immediately become dead units without any winning chance in the forthcoming competitive learning process. Second, the number of clusters must be pre-determined; only when the pre-determined cluster number equals the true cluster number can the k-means algorithm correctly find the cluster centers. Many researchers have done much work to circumvent these problems.
An extension of k-means named Frequency Sensitive Competitive Learning (FSCL) was proposed in [8]. In FSCL, the winning chance of a seed point is penalized as its past winning frequency increases, and vice versa. FSCL can successfully assign one or more seed points to each cluster without the dead-unit problem, but its clustering performance decreases when the cluster number is incorrectly selected in advance. Another algorithm, developed by Xu [7], is Rival Penalized Competitive Learning (RPCL). In RPCL, for each input, not only is the winner among the seed points updated to adapt to the input, but its rival is also de-learned by a smaller constant learning rate called the de-learning rate. RPCL can select the correct number automatically, but its performance is sensitive to the choice of the de-learning rate: if it is too small or too large, the clustering result is unsatisfactory. Recently, Cheung [9] proposed a new competitive learning approach called Competitive and Cooperative Learning (CCL). In CCL, seed points not only compete with each other for updating to adapt to each input, but the winner also dynamically selects several nearest competitors to form a cooperative team that adapts to the input together. On the whole, seed points located in the same cluster have more opportunity to cooperate than to compete in achieving the learning task. Experimental studies have demonstrated the outstanding performance of CCL.

In this paper, we explore the use of the CCL approach for color image segmentation. The rest of this paper is organized as follows. In Section II, we first briefly review the FSCL and CCL algorithms, and then present the color image segmentation algorithm using CCL. Experimental results are given in Section III. Finally, Section IV draws a conclusion.

II. COLOR IMAGE SEGMENTATION USING CCL

A. Brief Overview of CCL

Suppose there are N data points $x_1, x_2, \ldots, x_N$ to be partitioned into k clusters, with seed points $w_1, w_2, \ldots, w_k$. To this end, an adaptive version of the k-means algorithm, i.e., competitive learning, can be used. As mentioned above, competitive learning has the dead-unit problem. To deal with it, Ahalt et al. [8] proposed the frequency sensitive competitive learning (FSCL) approach, in which, apart from the distance of each $w_j$ to the input, an implicit penalty is applied to seed points with a high relative winning frequency in past competitions. The FSCL algorithm consists of the following steps:

Step 1: Pre-specify the number of clusters k and initialize the seed points $\{w_j\}_{j=1}^{k}$.

Step 2: Given an input $x_i$, calculate the indicator function $I(j \mid x_i)$ by

$$I(j \mid x_i) = \begin{cases} 1, & \text{if } j = \arg\min_{1 \le r \le k} n_r \, \lVert x_i - w_r \rVert^2 \\ 0, & \text{otherwise} \end{cases} \qquad (1)$$

where $n_r$ is the number of times $w_r$ has won in the past.

Step 3: Update the winning seed point $w_c$, i.e., the one with $I(c \mid x_i) = 1$, and its count $n_c$ by

$$w_c^{\text{new}} = w_c^{\text{old}} + \eta\,(x_i - w_c^{\text{old}}) \qquad (2)$$

$$n_c^{\text{new}} = n_c^{\text{old}} + 1 \qquad (3)$$

FSCL overcomes the dead-unit problem successfully. However, it requires the number of clusters to be pre-assigned: if k is not equal to the true number k*, FSCL leads to an incorrect clustering result. The RPCL algorithm proposed by Xu [7] can select the correct number automatically, but its performance is sensitive to the selection of the de-learning rate. Recently, Cheung [9] proposed a new semi-competitive learning algorithm named Competitive and Cooperative Learning (CCL).
The basic idea of CCL is that the k seed points not only compete with each other for updating to adapt to each input, but the winner also dynamically selects several nearest competitors to form a cooperative team that adapts to the input together. This competitive and cooperative mechanism automatically merges the extra seed points while making the seed points gradually converge to the corresponding cluster centers. Consequently, CCL performs robust clustering without prior knowledge of the exact cluster number, as long as the number of seed points is not less than the true one. In the following, k* denotes the true number of clusters in the input space. CCL can be described as follows:

Step 1: Pre-specify the number k of clusters with $k \ge k^*$, initialize the seed points $\{w_j\}_{j=1}^{k}$, and set $n_j = 1$ for $j = 1, 2, \ldots, k$.

Step 2: Given an input $x_i$, calculate $I(j \mid x_i)$ using (1).

Step 3: Let the cooperative set be $C = \{w_c\}$, and span C by

$$C = C \cup \{\, w_j : \lVert w_j - w_c \rVert \le \lVert w_c - x_i \rVert \,\}$$

That is, all seed points falling inside the circle centered at $w_c$ with radius $\lVert w_c - x_i \rVert$ are winners together with $w_c$; those outside the circle are not.

Step 4: Update all members of C by

$$w_u^{\text{new}} = w_u^{\text{old}} + \eta\,(x_i - w_u^{\text{old}}), \qquad w_u \in C,$$

and update only $n_c$, by (3).

Repeat Steps 2-4 until all seed points converge. CCL makes each extra seed point finally settle at one of the cluster centers, so the exact cluster number can be determined by counting the seed points that stay at distinct positions.

B. Color Image Segmentation Algorithm Using CCL

Color image segmentation can be considered as a color clustering procedure in a certain feature space. Many color features have been developed, such as the RGB color space, the I1I2I3 color space developed by Ohta [10], and the HSI color space. For simplicity, we use the RGB color space in this paper. Let I be a color image of size $M_1 \times M_2$ and let $N = M_1 \times M_2$ be the total number of pixels of the image. The set of image pixels is $D = \{x_i\}_{i=1}^{N}$, where $x_i = (x_i^R, x_i^G, x_i^B)$ is a $1 \times 3$ vector representing a color pixel and $x_i^X$ is the scalar observed on the X plane of the image.

Color image segmentation using CCL can be summarized as follows:

Step 1: Input an image and pre-specify the number of clusters k such that $k \ge k^*$, where k* is the true number of clusters in the input image.

Step 2: Randomly initialize the seed points $\{m_j\}_{j=1}^{k}$, where $m_j = (m_j^R, m_j^G, m_j^B) \in D$.

Step 3: Pick an image pixel $x_i$ randomly from D and calculate the indicator function $I(j \mid x_i)$ using (1).

Step 4: Let the cooperative set be $C = \{m_c\}$, and span C by

$$C = C \cup \{\, m_j : \lVert m_j - m_c \rVert \le \lVert m_c - x_i \rVert \,\}$$

to form a cooperative team.

Step 5: Update all members of C by

$$m_u^{\text{new}} = m_u^{\text{old}} + \eta\,(x_i - m_u^{\text{old}}), \qquad n_c^{\text{new}} = n_c^{\text{old}} + 1, \qquad m_u \in C.$$

Repeat Steps 3-5 until all seed points converge. Each cluster in the segmented image is then represented by its converged seed point. The reason for using CCL for color image segmentation is that CCL does not need the precise number of clusters in the image to be specified in advance, and the segmentation result is consistently satisfactory. In the next section, experimental results verify the performance of CCL in color image segmentation.
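One full CCL update (Steps 3-5) can be sketched as follows; the function name and the in-place update style are illustrative, not Cheung's reference code.

```python
import numpy as np

def ccl_step(x, W, counts, eta=0.05):
    """One CCL update for pixel x (shape (3,)) against seed points W (k, 3).

    Winner selection uses the frequency-sensitive rule of equation (1); the
    winner then recruits every seed at least as close to itself as the input
    is, and the whole cooperative team moves toward x together.
    """
    d = counts * np.sum((W - x) ** 2, axis=1)  # frequency-penalised distances
    c = int(np.argmin(d))                      # winner index
    team = np.linalg.norm(W - W[c], axis=1) <= np.linalg.norm(W[c] - x)
    W[team] += eta * (x - W[team])             # cooperative update
    counts[c] += 1                             # only the winner's count grows
    return W, counts
```

Iterating ccl_step over randomly drawn pixels until the seed points stop moving implements Steps 3-5; seeds that finish at the same position have been merged, and counting the distinct positions gives the cluster number.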
III. EXPERIMENT RESULTS

To demonstrate the segmentation performance of CCL, we carried out several experiments and compared the results with those obtained using FSCL and CL. In all experiments, seed points were selected randomly from the input images and the learning rate was $\eta = 0.05$. The results are shown in Figs. 1-3; in each figure, (a) is the original image, and (b), (c), and (d) are the segmentation results using CL, FSCL, and CCL, respectively.

In Fig. 1 the original image is House. For each approach, the number of seed points was initially set to 10 and the learning epochs to 20. The results show that the wall under the lower roof is more homogeneous with CCL than with CL or FSCL, and the shadow effect is removed to a greater extent by CCL. CCL merged three extra seed points, as shown in Table I.

[Figure 1: segmentation results of the House image. (a) Original image; (b) competitive learning; (c) frequency sensitive competitive learning; (d) competitive and cooperative learning.]

[Figure 2: segmentation results of the tree image, same layout as Figure 1.]

[Figure 3: segmentation results of the pepper image, same layout as Figure 1.]

In Fig. 2 the original image is tree; the seed points were initially set to 10 and the learning epochs to 15. The segmentation results show that the grass is more homogeneous with CCL than with CL or FSCL; CCL merged two extra seed points and finally selected eight. In Fig. 3 the original image is pepper; the seed points were initially set to 8 and the learning epochs to 15. Because of the non-uniform lighting, CL and FSCL segment improperly, especially in the strongly lit regions; CCL removes this effect to a large extent. The converged seed points of CCL are shown in Table I: CCL merges the extra seed points and selects the correct ones. The experimental results are better with CCL than with CL or FSCL.

IV. CONCLUSION

Color image segmentation is considered as a clustering procedure in color space. The competitive and cooperative learning algorithm is used to this end because it can select the correct number of clusters of an image. Comparing the segmentation results, one can conclude that the competitive and cooperative algorithm is a more efficient clustering approach for color segmentation than CL and FSCL.

TABLE I. CONVERGED SEED POINTS OF THE HOUSE, TREE, AND PEPPER IMAGES (R, G, B)

House image              | Tree image                          | Pepper image
158.49, 196.96, 220.65   | 85.525, 60.545, 90.72               | 131.05, 174.34, 84.652
117.13, 87.639, 96.825   | 175.51, 197.73, 207.5               | 185.87, 211.24, 167.44
158.49, 196.96, 220.65   | 150.87, 141.84, 138.77              | 180.29, 45.39, 37.432
210.32, 219.95, 216.19   | 2.9703e-53, 9.0361e-54, 2.6937e-53  | 180.29, 45.39, 37.432
84.016, 50.258, 68.587   | 76.831, 31.048, 70.782              | 73.685, 7.0814, 6.6097
158.49, 196.96, 220.65   | 219.75, 219.16, 219.43              | 179.33, 196.76, 90.989
161.47, 100.7, 90.549    | 150.87, 141.84, 138.77              | 112.71, 115.95, 54.983
134.27, 132.78, 149.98   | 95.334, 171.45, 199.95              | 180.29, 45.39, 37.432
169.12, 110.45, 103.58   | 2.9703e-53, 9.0361e-54, 2.6937e-53  |
158.49, 196.96, 220.65   | 2.9703e-53, 9.0361e-54, 2.6937e-53  |

REFERENCES
2259–2281,2001.[3] T. Uchiyama, M. A. Arbib, “Color image segmentation using competitivelearning,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol.12, Dec. 1994, pp.1197-1206.[4] D. Comanniciu, E. Meer, “Robust analysis of feature space: color imagesegmentation,” in In Proc.IEEE Conf. on Computer Vision and PatternRecognition, Puerto Rico, 1997, pp. 750-755.[5]H. Palus, M. BogdaĔski, “Clustering techniques in color imagesegmentation,” in Proc. of Methods of Artificial Intelligence, Gliwice,Poland, 2003, pp. 103-104.[6] D. E. Rumelhart, D. Zipser, “Feature discovery by competitive learning,”Cognitive Science, vol. 9, pp. 75-112, 1985.[7]L. Xu, A. KrzyĨak, E. Oja, “Rival penalized competitive learning forclustering analysis, RBF net, and curve detection,” IEEE Trans. NeuralNetwork, vol. 4, pp.636-649, July, 1993.[8]S. C. Ahalt, A. K. Krishnamurthy, P. Chen, D. E. Melton, “Competitivelearning algorithms for vector quantization,” Neural Networks, vol. 3, pp.277-291, 1990.[9]Y. M. Cheung, “A competitive and cooperative learning approach torobust data clustering,” Dept. of Computer Science, Hong Kong BaptistUniversity, Technical Report: COMP-03-021, 2003.[10] Y. Ohta, T. Kanade, T. Sakai, “Color information for regionsegmentation,” Computer Graphics Image Process. vol. 13, pp. 222–241,1980.。
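For readers who want to reproduce the clustering view of segmentation, the sketch below runs plain competitive learning (CL, winner-take-all) on the RGB pixels of an image. It is a minimal illustration under stated assumptions (the file name 'house.png' is a placeholder; implicit expansion needs MATLAB R2016b+), not the CCL algorithm above, whose cooperation and seed-merging steps are omitted.

```matlab
% Minimal competitive-learning (CL) color clustering - illustrative sketch.
I = im2double(imread('house.png'));        % placeholder input image
X = reshape(I, [], 3);                     % pixels as rows in RGB space
k = 10; epochs = 20; eta = 0.05;           % seeds, epochs, learning rate
W = X(randperm(size(X,1), k), :);          % initialize seeds from random pixels
for ep = 1:epochs
    for i = randperm(size(X,1), 5000)      % subsample pixels each epoch for speed
        [~, w] = min(sum((W - X(i,:)).^2, 2));     % winner-take-all
        W(w,:) = W(w,:) + eta * (X(i,:) - W(w,:)); % move winner toward the pixel
    end
end
% Assign every pixel to its nearest seed and paint it with the seed color:
[~, lab] = min(sum(X.^2, 2) + sum(W.^2, 2)' - 2*X*W', [], 2);
imshow(reshape(W(lab,:), size(I)))
```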

DSP Color Image Processing


Contents

1 Introduction
 1.1 Purpose of the course project
 1.2 Requirements of the course project
2 Basic principles
 2.1 Overview of DSP systems
 2.2 Overview of the CCS development platform
 2.3 The TI C5416 evaluation board and its hardware configuration
3 Implementation
 3.1 Program flowchart
 3.2 Implementation of the algorithm
 3.3 Software simulation, debugging, and results
4 Problems encountered and their solutions
5 Concluding remarks
6 References
Appendix

YUV Color Image Processing Based on the TI VC5416

Abstract  This course project was programmed on the TMS320VC5416 DSP chip. The software follows a modular design that refines the program into small, easy-to-implement modules; the programs were written mainly in the C language.

On the CCS simulation platform, YUV color image processing was finally implemented successfully by means of a Chinese-character overlay algorithm.

The final simulation results show that the TMS320VC5416 chip completes YUV color image processing and can be used to solve some practical problems.

Keywords: Chinese-character overlay algorithm; CCS 3.3; TI VC5416; YUV color image processing

1 Introduction

Digital signal processing uses special-purpose or general-purpose digital signal processors (DSPs) to analyze, extract features from, and transform signals by means of numerical computation.
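As a platform-independent illustration of the character-overlay idea described in the abstract (the C5416 source code itself is not reproduced in this excerpt), the sketch below burns a binary glyph mask into the luminance plane of a YCbCr image in MATLAB; the input file name and the random stand-in glyph are assumptions.

```matlab
% Overlay a binary character mask on the Y (luma) plane - illustrative only.
I   = imread('frame.png');             % placeholder RGB frame (>= 51x51 pixels)
yuv = rgb2ycbcr(I);                    % YCbCr stands in for the YUV space here
mask = false(size(yuv,1), size(yuv,2));
mask(20:51, 20:51) = rand(32) > 0.5;   % stand-in for a 32x32 character bitmap
Y = yuv(:,:,1);
Y(mask) = 235;                         % peak luma for uint8 YCbCr => white glyph
yuv(:,:,1) = Y;
imshow(ycbcr2rgb(yuv))                 % back to RGB for display
```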

Research on DSP-Based Image Grayscale Conversion Methods


Contents

Abstract
Chapter 1  Introduction
 1.1 Background and significance of the research
 1.2 Research status at home and abroad
 1.3 Structure of this thesis
Chapter 2  Principles of image grayscale conversion and MATLAB simulation
 2.1 Introduction
 2.2 Basic principles and methods of grayscale conversion
  2.2.1 Principles and methods of converting RGB color images to grayscale
 2.3 MATLAB-based simulation
  2.3.1 Overview of MATLAB
  2.3.2 Simulation results and analysis
Chapter 3  DSP-based implementation of the image processing
 3.1 Introduction
 3.2 Overview of the DSP system platform and the CCS development environment
 3.3 DSP-based implementation and analysis of the results
 3.4 Comparison of the image processing across platforms
Chapter 4  Conclusion
References
Appendix
Acknowledgements

Abstract  Grayscale conversion of color images is a basic image-processing method, widely used in image detection and recognition, image analysis and processing, and related fields. Converting the acquired color images to grayscale both speeds up subsequent algorithms and improves the overall effectiveness of the system, meeting more demanding requirements.

The powerful signal-processing capability of DSPs provides the application basis for real-time image processing.

With the rapid development of information technology, DSPs are being used ever more widely in graphics and imaging, for example in geometric transformation, image compression, image transmission, image enhancement, and image recognition.

In studying DSP-based grayscale conversion, this thesis first analyzes images under two color models, RGB and YUV, and then studies the principles and methods of grayscale conversion (the focus is the linear gray-level transformation of images). Because the RGB and YUV models share the same principle in grayscale conversion, the software simulation concentrates on the RGB model.

Building on a simulation of the RGB color model in the MATLAB environment, the program design was then completed on a DSP platform; the experimental results are given and analyzed in comparison with the MATLAB simulation results.

Four methods (the single-component method, the weighted-average method, the mean-value method, and the maximum-value method) were applied in turn to convert color images to grayscale, and the methods were compared and analyzed; a sketch of all four follows below.

Finally, the simulation of color-image grayscale conversion was completed; the MATLAB simulation results and the CCS processing results are very close, so the goal of this study was achieved.
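The four grayscale methods named above are one-liners in MATLAB; a compact sketch (the file name is a placeholder, and the BT.601 weights are the usual choice for the weighted-average method):

```matlab
% Four common RGB-to-gray conversions - a minimal sketch.
I = im2double(imread('img.png'));          % placeholder color image
R = I(:,:,1); G = I(:,:,2); B = I(:,:,3);
g_component = G;                           % component method: keep one channel
g_weighted  = 0.299*R + 0.587*G + 0.114*B; % weighted average (BT.601 luma)
g_mean      = (R + G + B) / 3;             % mean-value method
g_max       = max(max(R, G), B);           % maximum-value method
montage({g_component, g_weighted, g_mean, g_max})  % side-by-side comparison
```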

Hasselblad CFV-39 Digital Back User Manual

Fully supported models are: 202 FA, 203 FE, 205 TCC and 205 FCC. Other 200 and 2000 series models can be used with C type lenses only.
Hasselblad Natural Color Solution. Color management solutions have in the past imposed limitations on professional digital photographers, because of the forced choice of a specific color profile to suit the job: capturing various skin tones, metals, fabrics, flowers, etc. To combat this, Hasselblad has developed a new, powerful color profile to be used with its Phocus imaging software. Working with the new Hasselblad RGB color profile enables you to produce outstanding, reliable out-of-the-box colors, with skin tones, special product gradations, and other difficult colors reproduced effectively. To implement our new unique colors we have developed a new Hasselblad raw file format called 3F RAW (3FR). The new 3F RAW file format is designed to ensure that images captured on Hasselblad digital products are

A practical approach to digital signal processing


An important part of our teaching is a set of stand-alone DSP hardware platforms, developed in house, which have
provided useful and inexpensive means of demonstrating simple DSP algorithms in real time and of supporting final-year degree project work. Case studies are used to explore real-world problems and the interactions between DSP and other technologies, such as artificial neural networks and satellite communications.
DSP in Plymouth
DSP is a very wide subject and as such can only be taught selectively. In Plymouth, the bulk of DSP is taught in the final year of our B.Eng degree programmes. On the B.Eng Electrical & Electronic Engineering programme, the DSP course involves an in-depth study of analogue input/output design techniques (including sampling, quantisation, anti-aliasing/imaging filtering and oversampling techniques), discrete transforms (especially the Discrete Fourier Transform and the Fast Fourier Transform) and digital filter design, plus a selection of special topics (e.g. adaptive filters, speech processing and multi-rate processing). All the topics are illustrated with application examples to which the student can easily relate; these include biomedical engineering, digital audio and communication engineering. Students on the Communication Engineering degree programme cover broadly similar topics, but there is more emphasis on computer simulation using SPW (Signal Processing Workstation). Assignments currently set on SPW include modulation schemes (e.g. BPSK and QPSK), channel filtering (the raised cosine filter and its effect on Inter-Symbol Interference), Bit Error Rate measurements with additive noise, clock recovery and error control coding. In the near future, SPW will be covered in the second year, which is common to all engineering degree students within the School. A prerequisite for DSP in the final year is a second-year course which covers the basic concepts of signal processing and coding.
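As a taste of the pulse-shaping assignment mentioned above (independent of SPW, and assuming MATLAB with the Signal Processing Toolbox for sinc), the sketch below plots a raised-cosine impulse response and checks the zero-crossing property that makes it free of inter-symbol interference:

```matlab
% Raised-cosine pulse and its zero-ISI property - a small sketch.
beta = 0.35; sps = 8; span = 6;        % roll-off, samples/symbol, symbol span
t = -span/2 : 1/sps : span/2;          % time axis in symbol periods
h = sinc(t) .* cos(pi*beta*t) ./ (1 - (2*beta*t).^2);
h(abs(2*beta*t) == 1) = pi/4 * sinc(1/(2*beta));  % guard the 0/0 point, if sampled
plot(t, h), grid on, xlabel('t / T'), ylabel('h(t)')
% The response is ~0 at every nonzero integer symbol instant, hence no ISI:
disp(h(mod(t,1) == 0 & t ~= 0))
```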

Methods of Reducing Scatter Radiation


Reducing scatter radiation is a critical concern in the field of medical imaging. Scatter radiation refers to radiation that is deflected from its original path and can cause unnecessary exposure to both patients and healthcare professionals. This can lead to a variety of health risks and complications, making it essential to find effective ways to minimize scatter radiation in medical settings.

One effective way to reduce scatter radiation is through the use of appropriate shielding materials. Lead is a commonly used material for radiation shielding due to its high density and its ability to absorb and block scatter radiation effectively. By strategically placing lead shields in key locations around the imaging equipment, healthcare professionals can greatly reduce scatter radiation exposure to themselves and their patients.
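The protective effect of lead can be put on a rough quantitative footing with the Beer-Lambert attenuation law, I = I0·exp(-μx); the coefficient used below is an illustrative placeholder, not a reference value for any particular beam quality.

```matlab
% Beer-Lambert attenuation through a lead shield - illustrative numbers only.
mu = 60;            % linear attenuation coefficient of lead, 1/cm (placeholder;
                    % look up the real value for your photon energy)
x  = 0.05;          % shield thickness in cm (0.5 mm)
T  = exp(-mu * x);  % transmitted fraction I/I0
fprintf('Transmission through %.1f mm Pb: %.1f%%\n', x*10, 100*T)
```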

Volume Flow Rate


Volume flow rate

Disya Ratanakorn*, Jesada Keandaoungchan
Division of Neurology, Department of Medicine, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, Thailand

KEYWORDS: Volume flow rate; Doppler method; Color velocity imaging quantification; Quantitative flow measurement system; Angle-independent Doppler technique by QuantixND system

Summary  Vascular imaging of the carotid and vertebral arteries may not be sufficient to evaluate patients with stroke and other cerebrovascular disorders. Cerebral blood flow measurement can add information to increase the accuracy of diagnosis, assessment, and planning of management in these patients. There are many noninvasive quantitative methods to measure cerebral blood flow, including volume flow rate measured by ultrasound. This article addresses mainly the different ultrasound techniques to measure cerebral blood flow. Clinical applications, volume flow rate in normal and abnormal conditions with a case example, and the advantages and disadvantages of the ultrasound techniques are also described. © 2012 Elsevier GmbH.

* Corresponding author. Tel.: +66 2 201 2318; fax: +66 2 201 1645. E-mail address: ****************.th (D. Ratanakorn).

Introduction

Vascular imaging of the carotid and vertebral arteries may not be sufficient to evaluate patients with stroke and other cerebrovascular disorders. Cerebral blood flow (CBF) measurement can add information to increase the accuracy of diagnosis, assessment, and planning of management in these patients.

Methods for measurement of cerebral blood flow

There are many noninvasive quantitative methods to measure CBF, including stable xenon-enhanced computed tomography, single-photon emission computed tomography, positron-emission tomography, and magnetic resonance imaging. These methods are reliable and accurate for CBF measurement. However, they are rather expensive and require transferring patients to the imaging or radionuclide facility, which may be a limitation in critically ill, sedated, or ventilated patients [1].

Volume flow rate measurement by ultrasound

Several ultrasound methods have been used to measure the volume flow rate (VFR) of CBF, such as the Doppler method [2], color velocity imaging quantification (CVIQ) [3], the quantitative flow measurement system (QFM) [4,5], and the angle-independent Doppler technique of the QuantixND system [6]. The common carotid artery (CCA) is quite accessible and reliable for measuring VFR, whereas it is more difficult to obtain reliable VFR in the internal carotid artery (ICA) or vertebral artery (VA) because these vessels lie deeper. VFR measurements are usually obtained 1.5-2.0 cm below the carotid bifurcation in the CCA, 1-2 cm above the carotid bifurcation in the ICA, and between the 4th and 5th cervical vertebrae in the inter-osseous segment of the VA, using a high-resolution linear probe with pulsed Doppler imaging [7].

Doppler method

The Doppler method can estimate VFR at a specific point in a vessel by multiplying the flow velocity by the cross-sectional lumen area at that specific point in time (Fig. 1).
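In other words, the Doppler estimate is simply mean velocity times lumen cross-section; the toy computation below uses hypothetical CCA numbers and the circular-lumen assumption:

```matlab
% Doppler-style volume flow rate: VFR = mean velocity x lumen area.
d_cm  = 0.62;                 % hypothetical lumen diameter, cm (6.2 mm)
v_cms = 18;                   % hypothetical time-averaged mean velocity, cm/s
A     = pi * (d_cm/2)^2;      % cross-sectional area, cm^2 (circular lumen)
VFR   = v_cms * A * 60;       % ml/min, since 1 cm^3 = 1 ml
fprintf('VFR = %.0f ml/min\n', VFR)   % about 326 ml/min, a normal CCA value
```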
However, the Doppler method does not provide a profile of instantaneous peak velocities across the entire vessel and cannot adjust for changes in the flow lumen throughout the cardiac cycle.

Figure 1  VFR measurements using the Doppler method in the CCA (A), ICA (B), and VA (C), with large sample volumes across the entire vessel lumens.

Color velocity imaging quantification

CVIQ measures VFR by using time-domain processing with color velocity imaging combined with a synchronous M-mode color display to provide an instantaneous profile of the peak velocities across the flow lumen, as well as a continuous estimate of the diameter of the flow lumen throughout the cardiac cycle (Fig. 2). By assuming a circular vessel and axially symmetrical flow, CVIQ can be calculated automatically with built-in software.

Figure 2  VFR measurement using CVIQ in the CCA with the optimal color box across the entire lumen in M-mode display (A) and synchronous instantaneous peak velocities across the flow lumen (B). With permission from Professor Charles H. Tegeler.

Quantitative flow measurement system

QFM comprises two components. One component uses one transducer with ultrasonic echo tracking to measure vessel diameter, and the other uses three transducers with continuous Doppler, independent of incident angles, to measure absolute blood flow velocity. QFM can be calculated using the vessel diameter (for the cross-sectional area) and the absolute blood flow velocity.

Angle-independent Doppler technique of the QuantixND system

The QuantixND system is an angle-independent Doppler technique that employs dual ultrasound beams within one insonating probe at a defined angle to each other. The real-time information is stored automatically and analyzed by the computer.

VFR measured by CVIQ and the Doppler method

The mean values of VFR in 50 healthy subjects as measured by CVIQ and the Doppler method are 340.9 ± 75.6 and 672.8 ± 152.9 ml/min for the CCA, 226.9 ± 65.0 and 316.2 ± 89.1 for the ICA, and 92.2 ± 36.7 and 183.5 ± 90.8 for the ECA, respectively [2]. VFR is higher in males than in females and decreases with increasing age. The Doppler method tends to overestimate VFR, and CVIQ seems to be more accurate than the Doppler method for measuring carotid artery VFR.
However, CCA VFR measured by CVIQ and the Doppler method shows no difference in 0-95% ICA stenosis, but CCA VFR by the Doppler method is higher than that measured by CVIQ in 95-100% ICA stenosis [8].

Figure 3  CCA VFR measured by the Doppler method in a 46-year-old male with right ICA occlusion. Color flow imaging shows right ICA occlusion (A) and a normal left ICA (B), with a right CCA VFR of 159 ml/min (C) and a left CCA VFR of 493 ml/min (D).

Clinical applications

VFR measurement can be useful for grading carotid stenosis, especially with coexisting contralateral carotid stenosis or occlusion, to avoid overestimating the degree of stenosis when using only flow-velocity criteria; for evaluating collateral flow and cerebrovascular reserve; for identifying feeders and for follow-up studies in intracranial arteriovenous malformation; for quantifying hemodynamic changes in subclavian steal syndrome; for assessing vasospasm in subarachnoid hemorrhage; and for monitoring CBF before and after carotid endarterectomy [9,10]. In addition, there is a direct correlation between middle cerebral artery mean flow velocity (MCA Vm), CCA VFR, and end-expiratory CO2 in normal subjects. The MCA Vm and CCA VFR increase 6.1% and 5.3% per mmHg increase in end-expiratory CO2, respectively, and the MCA Vm increases 0.3 cm/s for each 1 ml/min increase in CCA VFR [11]. Therefore, measurement of CCA VFR changes during CO2 inhalation may be an alternative method to measure cerebral vasoreactivity in patients with inadequate temporal windows.

VFR in carotid stenosis

CCA VFR measured by the Doppler method and CVI-Q at different degrees of carotid stenosis are 359 ± 130 and 337 ± 96 ml/min, respectively, for individuals without ICA stenosis, 310 ± 99 and 293 ± 133 ml/min for 50-75% ICA stenosis, 347 ± 80 and 195 ± 131 ml/min for 75-95% ICA stenosis, 152 ± 36 and 63 ± 25 ml/min for 95-99% ICA stenosis, and 125 ± 47 and 58 ± 22 ml/min for ICA occlusion [8]. A reduction of ipsilateral CCA VFR is present in patients with severe ICA stenosis of 75-99% or ICA occlusion, as shown in Fig. 3.

Conclusions

Compared with other brain perfusion imaging techniques, VFR obtained with ultrasound does not provide values for each brain region but represents only one value for each supplying vessel [10]. It may be limited by operator dependence, extra examination time, the requirement for patient cooperation, extensive plaque formation, turbulent flow, and tortuous and asymmetrical vessels. Nevertheless, VFR measured by ultrasound is still the easiest, most feasible, noninvasive, and repeatable bedside examination, with no exposure to contrast media or radiation.

Appendix A. Supplementary data

Supplementary data associated with this article can be found in the online version, at doi:10.1016/j.permed.2012.03.008.

References

[1] Markus HS. Cerebral perfusion and stroke. J Neurol Neurosurg Psychiatry 2004;75:353-61.
[2] Ho SSY, Metreweli C. Preferred technique for blood volume measurement in cerebrovascular disease. Stroke 2000;31:1342-5.
[3] Eicke BM, Tegeler CH, Howard G, Myers LG. In vitro reliability of flow volume measurements with color velocity imaging. J Ultrasound Med 1993;12:543.
[4] Furuhata H, Sugano R, Kodaira K, Aoyagi T, Matsumoto H, Hayashi J, et al. An ultrasonic Doppler method designed for the measurement of absolute blood velocity values. Med Elec Bioeng 1978;16:264-8.
[5] Mizukami M, Yamaguchi K, Yunoki K. Evaluation of occlusive cerebrovascular disease using ultrasonic quantitative flow measurement. Stroke 1981;12:793-8.
[6] Schebesch KM, Simka S, Woertgen C, Brawanski A, Rothoerl RD. Normal values of volume flow in the internal carotid artery measured by a new angle-independent Doppler technique for evaluating cerebral perfusion. Acta Neurochir (Wien) 2004;146:983-6.
[7] Scheel P, Ruge C, Petruch UR, Schöning M. Color duplex measurement of cerebral blood flow volume in healthy adults. Stroke 2000;31:147-50.
[8] Likittanasombut P, Reynolds P, Meads D, Tegeler C. Volume flow rate of common carotid artery measured by Doppler method and color velocity imaging quantification (CVI-Q). J Neuroimaging 2006;16:34-8.
[9] Tegeler CH, Ratanakorn D. Neurosonology. In: Fisher M, Bogousslavsky J, editors. Textbook of neurology. Newton, MA: Butterworth-Heinemann; 1998. p. 101-18.
[10] Wintermark M, Sesay M, Barbier E, Borbély K, Dillon WP, Eastwood JD, et al. Comparative overview of brain perfusion imaging techniques. Stroke 2005;36:e83-99.
[11] Ratanakorn D, Greenberg J, Meads DB, Tegeler CH. Middle cerebral artery flow velocity correlates with common carotid artery volume flow rate after CO2 inhalation. J Neuroimaging 2001;11:401-5.

An Introduction to Techniques for Teaching Colors in English

Colors can be used to create visual images and make language more vivid and engaging
The Use of Colors in English
Color teaching can help learners better understand and remember vocabulary
Challenge
Challenge: How to ensure accurate color teaching
Solution
Gamification can make color teaching more exciting and engaging for students. By incorporating elements of competition, collaboration, and reward systems, students are more likely to stay engaged and interested in the subject matter.
Seek out reputable educational publishers or online resources that specialize in color-teaching materials. These resources can include workbooks, online courses, videos, and other materials that will enhance your students' learning experience.
Picture method
Summary

Design of a Pseudo-Color Image Equalization Enhancement System Based on the Dark Channel Prior


0 Introduction

Image enhancement highlights and expresses the useful information contained in an image and is an important research topic wherever image data are applied. The method is used in many fields, including visualization imagery, satellite imagery, military applications, remote sensing, and reconnaissance [1]. The image data in these fields commonly suffer from low resolution, and resolution is a decisive factor in image quality; copying, receiving, transmitting, and acquiring an image may all degrade its quality.

Design of a pseudo-color image equalization enhancement system based on the dark channel prior

MENG Xian
(Zhengzhou University of Science and Technology, Zhengzhou 450064, China)

Abstract: To address the low color sensitivity of traditional pseudo-color image equalization enhancement systems, a pseudo-color image equalization enhancement system based on the dark channel prior is proposed. The system hardware consists of an initialization module, a pseudo-color image processing module, a pseudo-color image storage module, a pseudo-color image equalization enhancement module, and a pseudo-color image display module; the system software is the pseudo-color image equalization processing program, and the combination of hardware and software realizes equalization enhancement of pseudo-color images. Compared with the system based on variance and the pseudo-color image equalization enhancement system based on single-precision floating-point instructions, the experimental results show that the proposed system has the highest color sensitivity and is more suitable for pseudo-color image enhancement.
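The dark channel prior itself is simple to state: in most haze-free natural images, at least one RGB channel is close to zero somewhere within every local patch. A minimal MATLAB sketch of the dark channel computation follows (the patch size and file name are assumptions, and this is an illustration of the prior, not the paper's hardware pipeline):

```matlab
% Dark channel of an RGB image - minimal sketch of the prior's key quantity.
I = im2double(imread('scene.png'));   % placeholder input image
patch = 15;                           % local window size (a typical choice)
minRGB = min(I, [], 3);               % per-pixel minimum over R, G, B
dark = imerode(minRGB, ones(patch));  % local minimum over each patch
imshow(dark)                          % near zero almost everywhere if haze-free
```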

Chapter 16 - Color image processing


CHAPTER 16
COLOR IMAGE PROCESSING

WHAT WILL WE LEARN?

• What are the most important concepts and terms related to color perception?
• What are the main color models used to represent and quantify color?
• How are color images represented in MATLAB?
• What is pseudocolor image processing and how does it differ from full-color image processing?
• How can monochrome image processing techniques be extended to color images?

16.1 THE PSYCHOPHYSICS OF COLOR

Color perception is a psychophysical phenomenon that combines two main components:

1. The physical properties of light sources (usually expressed by their spectral power distribution (SPD)) and surfaces (e.g., their absorption and reflectance capabilities).
2. The physiological and psychological aspects of the human visual system (HVS).

In this section, we expand on the discussion started in Section 5.2.4 and present the main concepts involved in color perception and representation.

16.1.1 Basic Concepts

The perception of color starts with a chromatic light source, capable of emitting electromagnetic radiation with wavelengths between approximately 400 and 700 nm. Part of that radiation reflects on the surfaces of the objects in a scene and the resulting reflected light reaches the human eye, giving rise to the sensation of color. An object that reflects light almost equally in all wavelengths within the visible spectrum is perceived as white, whereas an object that absorbs most of the incoming light, regardless of the wavelength, is seen as black. The perception of several shades of gray between pure white and pure black is usually referred to as achromatic. Objects that have more selective properties are considered chromatic, and the range of the spectrum that they reflect is often associated with a color name. For example, an object that absorbs most of the energy within the 565-590 nm wavelength range is considered yellow.

A chromatic light source can be described by three basic quantities:

• Intensity (or radiance): the total amount of energy that flows from the light source, measured in watts (W).
• Luminance: a measure of the amount of information an observer perceives from a light source, measured in lumen (lm). It corresponds to the radiant power of a light source weighted by a spectral sensitivity function (characteristic of the HVS).
• Brightness: the subjective perception of (achromatic) luminous intensity.

The human retina (the surface at the back of the eye where images are projected) is coated with photosensitive receptors of two different types: cones and rods. Rods cannot encode color but respond to lower luminance levels and enable vision under darker conditions. Cones are primarily responsible for color perception and operate only under bright conditions. There are three types of cone cells (L cones, M cones, and S cones, corresponding to long (≈610 nm), medium (≈560 nm), and short (≈430 nm) wavelengths, respectively) whose spectral responses are shown in Figure 16.1.¹

FIGURE 16.1 Spectral absorption curves of the short (S), medium (M), and long (L) wavelength pigments in human cone and rod (R) cells. Courtesy of Wikimedia Commons.

¹ The figure also shows the spectral absorption curve for rods, responsible for achromatic vision.

The existence of three specialized types of cones in the human eye was hypothesized more than a century before it could be confirmed experimentally by Thomas Young and his trichromatic theory of vision in 1802. Young's theory explains only part of the color vision process, though. It does not explain, for instance, why it is possible to speak of "bluish green" colors, but not "bluish yellow" ones. Such understanding came with the opponent-process theory of color vision, brought forth by Edward Herring in 1872. The colors to which the cones respond more strongly are known as the primary colors of light and have been standardized by the CIE (Commission Internationale de l'Eclairage, the International Commission on Illumination, an organization responsible for color standards) as red (700 nm), green (546.1 nm), and blue (435.8 nm).

The secondary colors of light, obtained by additive mixtures of the primaries, two colors at a time, are magenta (or purple) = red + blue, cyan (or turquoise) = blue + green, and yellow = green + red (Figure 16.2a). For color mixtures using pigments (or paints), the primary colors are magenta, cyan, and yellow and the secondary colors are red, green, and blue (Figure 16.2b). It is important to note that for pigments a color is named after the portion of the spectrum that it absorbs, whereas for light a color is defined based on the portion of the spectrum that it emits. Consequently, mixing all three primary colors of light results in white (i.e., the entire spectrum of visible light), whereas mixing all three primary colors of paints results in black (i.e., all colors have been absorbed, and nothing remains to reflect the incoming light).

FIGURE 16.2 Additive (a) and subtractive (b) color mixtures.

The use of the expression primary colors to refer to red, green, and blue may lead to a common misinterpretation: that all visible colors can be obtained by mixing different amounts of each primary color, which is not true. A related phenomenon of color perception, the existence of color metamers, may have contributed to this confusion. Color metamers are combinations of primary colors (e.g., red and green) perceived by the HVS as another color (in this case, yellow) that could have been produced by a spectral color of fixed wavelength (of ≈580 nm).

16.1.2 The CIE XYZ Chromaticity Diagram

In color matching experiments performed in the late 1920s, subjects were asked to adjust the amounts of red, green, and blue on one patch that were needed to match a color on a second patch. The results of such experiments are summarized in Figure 16.3.
The existence of negative values of red and green in this figure means that the second patch had to be made brighter (i.e., equal amounts of red, green, and blue had to be added to the color) for the subjects to report a perfect match. Since adding amounts of primary colors on the second patch corresponds to subtracting them in the first, negative values can occur. The amounts of three primary colors in a three-component additive color model (on the first patch) needed to match a test color (on the second patch) are called tristimulus values.

FIGURE 16.3 RGB color matching function (CIE 1931). Courtesy of Wikimedia Commons.

FIGURE 16.4 XYZ color matching function (CIE 1931). Courtesy of Wikimedia Commons.

To remove the inconvenience of having to deal with (physically impossible) negative values to represent observable colors, in 1931 the CIE adopted standard curves for a hypothetical standard (colorimetric) observer, considered to be the chromatic response of the average human viewing through a 2° angle, due to the belief at that time that the cones resided within a 2° arc of the fovea.² These curves specify how an SPD corresponding to the physical power (or radiance) of the light source can be transformed into a set of three numbers that specifies a color. These curves are not based on the values of R, G, and B, but on a new set of tristimulus values: X, Y, and Z. This model, whose color matching functions are shown in Figure 16.4, is known as the CIE XYZ (or CIE 1931) model. The tristimulus values of X, Y, and Z in Figure 16.4 are related to the values of R, G, and B in Figure 16.3 by the following linear transformations:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.431 & 0.342 & 0.178 \\ 0.222 & 0.707 & 0.071 \\ 0.020 & 0.130 & 0.939 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{16.1}$$

and

$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 3.063 & -1.393 & -0.476 \\ -0.969 & 1.876 & 0.042 \\ 0.068 & -0.229 & 1.069 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \tag{16.2}$$

² This understanding was eventually revised and a new model, CIE 1964, for a 10° standard observer, was produced. Interestingly enough, the CIE 1931 model is more popular than the CIE 1964 alternative even today.

The CIE XYZ color space was designed so that the Y parameter corresponds to a measure of the brightness of a color. The chromaticity of a color is specified by two other parameters, x and y, known as chromaticity coordinates and calculated as

$$x = \frac{X}{X+Y+Z} \tag{16.3}$$

$$y = \frac{Y}{X+Y+Z} \tag{16.4}$$

where x and y are also called normalized tristimulus values. The third normalized tristimulus value, z, can just as easily be calculated as

$$z = \frac{Z}{X+Y+Z} \tag{16.5}$$

Clearly, a combination of the three normalized tristimulus values results in

$$x + y + z = 1 \tag{16.6}$$

The resulting CIE XYZ chromaticity diagram (Figure 16.5) allows the mapping of a color to a point of coordinates (x, y) corresponding to the color's chromaticity.
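The matrices in (16.1) and (16.2) are easy to sanity-check numerically; the following plain-MATLAB snippet (no toolboxes assumed) converts an RGB triple to XYZ and then to (x, y) chromaticity using (16.3) and (16.4):

```matlab
% Check Eq. (16.1)-(16.4): RGB -> XYZ -> chromaticity (x, y).
M = [0.431 0.342 0.178;
     0.222 0.707 0.071;
     0.020 0.130 0.939];        % RGB-to-XYZ matrix of Eq. (16.1)
rgb = [1; 0; 0];                % the red primary
xyz = M * rgb;                  % Eq. (16.1)
x = xyz(1) / sum(xyz);          % Eq. (16.3)
y = xyz(2) / sum(xyz);          % Eq. (16.4)
fprintf('x = %.3f, y = %.3f\n', x, y)   % ~ (0.640, 0.330) for red
% M \ xyz recovers rgb, i.e., the matrix of Eq. (16.2) is inv(M) up to rounding.
```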
The complete specification of a color (chromaticity and luminance) takes the form of an xyY triple.³ To recover X and Z from x, y, and Y, we can use

$$X = \frac{x}{y}\,Y \tag{16.7}$$

$$Z = \frac{1-x-y}{y}\,Y \tag{16.8}$$

³ This explains why this color space is also known as the CIE xyY color space.

The resulting CIE XYZ chromaticity diagram shows a horseshoe-shaped outer curved boundary, representing the spectral locus of wavelengths (in nm) along the visible light portion of the electromagnetic spectrum. The line of purples on a chromaticity diagram joins the two extreme points of the spectrum, suggesting that the sensation of purple cannot be produced by a single wavelength: it requires a mixture of shortwave and longwave light, and for this reason purple is referred to as a nonspectral color. All colors of light are contained in the area in (x, y) bounded by the line of purples and the spectral locus, with pure white at its center.

FIGURE 16.5 CIE XYZ color model.

The inner triangle in Figure 16.6a represents a color gamut, that is, a range of colors that can be produced by a physical device, in this case a CRT monitor. Different color image display and printing devices and technologies exhibit gamuts of different shape and size, as shown in Figure 16.6. As a rule of thumb, the larger the gamut, the better the device's color reproduction capabilities.

FIGURE 16.6 Color gamut for three different devices: (a) CRT monitor; (b) printer; (c) film. The RGB triangle is the same in all figures to serve as a reference for comparison.

16.1.3 Perceptually Uniform Color Spaces

One of the main limitations of the CIE XYZ chromaticity diagram lies in the fact that a distance on the xy plane does not correspond to the degree of difference between two colors. This was demonstrated in the early 1940s by David MacAdam, who conducted experiments asking subjects to report noticeable changes in color (relative to a starting color stimulus). The result of that study is illustrated in Figure 16.7, showing the resulting MacAdam ellipses: regions on the chromaticity diagram corresponding to all colors that are indistinguishable, to the average human eye, from the color at the center of the ellipse. The contour of the ellipse represents the just noticeable differences (JNDs) of chromaticity. Based on the work of MacAdam, several CIE color spaces, most notably the CIE L*u*v* (also known as CIELUV) and the CIE L*a*b* (also known as CIELAB), were developed with the goal of achieving perceptual uniformity, that is, having an equal distance in the color space correspond to equal differences in color.

FIGURE 16.7 MacAdam ellipses overlapped on the CIE 1931 chromaticity diagram. Courtesy of Wikimedia Commons.

In MATLAB

The IPT has extensive support for conversion among CIE color spaces. Converting from one color space to another is usually accomplished by using the function makecform (to create a color transformation structure that defines the desired color space conversion) followed by applycform, which takes the color transformation structure as a parameter. Other functions for manipulation of CIE XYZ and CIELAB values are listed in Table 16.1.

TABLE 16.1 IPT Functions for CIE XYZ and CIELAB Color Spaces

Function | Description
xyz2double | Converts an M×3 or M×N×3 array of XYZ color values to double
xyz2uint16 | Converts an M×3 or M×N×3 array of XYZ color values to uint16
lab2double | Converts an M×3 or M×N×3 array of L*a*b* color values to double
lab2uint16 | Converts an M×3 or M×N×3 array of L*a*b* color values to uint16
lab2uint8 | Converts an M×3 or M×N×3 array of L*a*b* color values to uint8
whitepoint | Returns a 3×1 vector of XYZ values scaled so that Y = 1

16.1.4 ICC Profiles

An ICC (International Color Consortium) profile is a standardized description of a color input or output device, or a color space, according to standards established by the ICC. Profiles are used to define a mapping between the device source or target color space and a profile connection space (PCS), which is either CIELAB (L*a*b*) or CIEXYZ.

In MATLAB

The IPT has several functions to support ICC profile operations. They are listed in Table 16.2.

TABLE 16.2 IPT Functions for ICC Profile Manipulation

Function | Description
iccread | Reads an ICC profile into the MATLAB workspace
iccfind | Finds ICC color profiles on a system, or a particular ICC color profile whose description contains a certain text string
iccroot | Returns the name of the directory that is the default system repository for ICC profiles
iccwrite | Writes an ICC color profile to a disk file

16.2 COLOR MODELS

A color model (also called color space or color system) is a specification of a coordinate system and a subspace within that system where each color is represented by a single point. There have been many different color models proposed over the last 400 years. Contemporary color models have also evolved to specify colors for different purposes (e.g., photography, physical measurements of light, color mixtures, etc.). In this section, we discuss the most popular color models used in image processing.

16.2.1 The RGB Color Model

The RGB color model is based on a Cartesian coordinate system whose axes represent the three primary colors of light (R, G, and B), usually normalized to the range [0, 1] (Figure 16.8). The eight vertices of the resulting cube correspond to the three primary colors of light, the three secondary colors, pure white, and pure black. Table 16.3 shows the R, G, and B values for each of these eight vertices.

FIGURE 16.8 RGB color model.

TABLE 16.3 R, G, and B Values for Eight Representative Colors Corresponding to the Vertices of the RGB Cube

Color Name | R | G | B
Black | 0 | 0 | 0
Blue | 0 | 0 | 1
Green | 0 | 1 | 0
Cyan | 0 | 1 | 1
Red | 1 | 0 | 0
Magenta | 1 | 0 | 1
Yellow | 1 | 1 | 0
White | 1 | 1 | 1

RGB color coordinates are often represented in hexadecimal notation, with individual components varying from 00 (decimal 0) to FF (decimal 255). For example, a pure (100% saturated) red would be denoted FF0000, whereas a slightly desaturated yellow could be written as CCCC33. The number of discrete values of R, G, and B is a function of the pixel depth, defined as the number of bits used to represent each pixel: a typical value is 24 bits = 3 image planes × 8 bits per plane. The resulting cube, with more than 16 million possible color combinations, is shown in Figure 16.9.

In MATLAB

The RGB cube in Figure 16.9 was generated using patch, a graphics function for creating patch graphics objects, made up of one or more polygons, which can be specified by passing the coordinates of their vertices and the coloring and lighting of the patch as parameters.

FIGURE 16.9 RGB color cube.

16.2.2 The CMY and CMYK Color Models

The CMY model is based on the three primary colors of pigments (cyan, magenta, and yellow). It is used for color printers, where each primary color usually corresponds to an ink (or toner) cartridge. Since the addition of equal amounts of each primary to produce black usually produces an unacceptable, muddy-looking black, in practice a fourth color, black (K), is added, and the resulting model is called CMYK. The conversion from RGB to CMY is straightforward:

$$\begin{bmatrix} C \\ M \\ Y \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} - \begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{16.9}$$
The inverse operation (conversion from CMY to RGB), although equally easy from a mathematical standpoint, is of little practical use.

In MATLAB

Conversion between RGB and CMY in MATLAB can also be accomplished using the imcomplement function.

16.2.3 The HSV Color Model

Color models such as the RGB and CMYK described previously are very convenient to specify color coordinates for display or printing, respectively. They are not, however, useful to capture a typical human description of color. After all, none of us goes to a store looking for a FFFFCC shirt to go with the FFCC33 jacket we got for our birthday. Rather, the human perception of color is best described in terms of hue, saturation, and lightness. Hue describes the color type, or tone, of the color (and very often is expressed by the "color name"), saturation provides a measure of its purity (or how much it has been diluted in white), and lightness refers to the intensity of light reflected from objects. For representing colors in a way that is closer to the human description, a family of color models has been proposed. The common aspect among these models is their ability to dissociate the dimension of intensity (also called brightness or value) from the chromaticity, expressed as a combination of hue and saturation, of a color. We will look at a representative example from this family: the HSV (hue-saturation-value) color model.⁴

⁴ The terminology for color models based on hue and saturation is not universal, which is unfortunate. What we call HSV in this book may appear under different names and acronyms elsewhere. Moreover, the distinctions among these models are very subtle, and different acronyms might be used to represent slight variations among them.

The HSV (sometimes called HSB) color model can be obtained by looking at the RGB color cube along its main diagonal (or gray axis), which results in a hexagon-shaped color palette. As we move along the main axis in the pyramid in Figure 16.10, the hexagon gets smaller, corresponding to decreasing values of V, from 1 (white) to 0 (black). For any hexagon, the three primary and the three secondary colors of light are represented in its vertices. Hue, therefore, is specified as an angle relative to the origin (the red axis by convention). Finally, saturation is specified by the distance to the axis: the longer the distance, the more saturated the color.

FIGURE 16.10 The HSV color model as a hexagonal cone.

Figure 16.11 shows an alternative representation of the HSV color model in which the hexcone is replaced by a cylinder. Figure 16.12 shows yet another equivalent three-dimensional representation for the HSV color model, as a cone with a circular-shaped base.

FIGURE 16.11 The HSV color model as a cylinder.
FIGURE 16.12 The HSV color model as a cone.

In summary, the main advantages of the HSV color model (and its closely related alternatives) are its ability to match the human way of describing colors and to allow for independent control over hue, saturation, and intensity (value). The ability to isolate the intensity component from the other two, which are often collectively called chromaticity components, is a requirement in many color image processing algorithms, as we shall see in Section 16.5. Its main disadvantages include the discontinuity in numeric values of hue around red, the computationally expensive conversion to/from RGB, and the fact that hue is undefined for a saturation of 0.

In MATLAB

Converting between HSV and RGB in MATLAB can be accomplished by the functions rgb2hsv and hsv2rgb. Tutorial 16.2 explores these functions in detail.
16.2.4 The YIQ (NTSC) Color Model

The NTSC color model is used in the American standard for analog television, which will be described in more detail in Chapter 20. One of the main advantages of this model is the ability to separate grayscale contents from color data, a major design requirement at a time when emerging color TV sets and transmission equipment had to be backward compatible with their B&W predecessors. In the NTSC color model, the three components are luminance (Y) and two color-difference signals, hue (I) and saturation (Q).⁵ Conversion from RGB to YIQ can be performed using the transformation

$$\begin{bmatrix} Y \\ I \\ Q \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ 0.596 & -0.274 & -0.322 \\ 0.211 & -0.523 & 0.312 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{16.10}$$

⁵ The choice of the letters I and Q stems from the fact that one of the components is in phase, whereas the other is off by 90°, that is, in quadrature.

In MATLAB

Converting between RGB and YIQ (NTSC) in MATLAB is accomplished using the functions rgb2ntsc and ntsc2rgb.

16.2.5 The YCbCr Color Model

The YCbCr color model is the most popular color representation for digital video.⁶ In this format, one component represents luminance (Y), while the other two are color-difference signals: Cb (the difference between the blue component and a reference value) and Cr (the difference between the red component and a reference value). Conversion from RGB to YCbCr is possible using the transformation

$$\begin{bmatrix} Y \\ Cb \\ Cr \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ -0.169 & -0.331 & 0.500 \\ 0.500 & -0.419 & -0.081 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{16.11}$$

⁶ More details in Chapter 20.

In MATLAB

Converting between YCbCr and RGB in MATLAB can be accomplished by the functions rgb2ycbcr and ycbcr2rgb.
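Equation (16.10) can be exercised directly against the IPT function named above; a one-pixel check (values in [0, 1]):

```matlab
% Compare Eq. (16.10) with rgb2ntsc on a single pixel.
rgb = [0.2; 0.5; 0.8];
A = [0.299  0.587  0.114;
     0.596 -0.274 -0.322;
     0.211 -0.523  0.312];                  % Eq. (16.10)
yiq_manual = A * rgb;
yiq_ipt = rgb2ntsc(reshape(rgb, 1, 1, 3));  % rgb2ntsc expects an image array
disp([yiq_manual, squeeze(yiq_ipt)])        % the two columns should agree
```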
16.3 REPRESENTATION OF COLOR IMAGES IN MATLAB

As we have seen in Chapter 2, color images are usually represented as RGB (24 bits per pixel) or indexed with a palette (color map), usually of size 256. These representation modes are independent of file format (although GIF images are usually indexed and JPEG images typically are not). In this section, we provide a more detailed analysis of color image representation in MATLAB.

16.3.1 RGB Images

An RGB color image in MATLAB corresponds to a 3D array of dimensions M×N×3, where M and N are the image's height and width (respectively) and 3 is the number of color planes (channels). Each color pixel is represented as a triple containing the values of its R, G, and B components (Figure 16.13). Each individual array of size M×N is called a component image and corresponds to one of the color channels: red, green, or blue. The data class of the component images determines their range of values. For RGB images of class double, the range of values is [0.0, 1.0], whereas for classes uint8 or uint16, the ranges are [0, 255] and [0, 65535], respectively. RGB images typically have a bit depth of 24 bits per pixel (8 bits per pixel per component image), resulting in a total number of (2⁸)³ = 16,777,216 colors.

FIGURE 16.13 RGB color image representation.

EXAMPLE 16.1

The following MATLAB sequence can be used to open, verify the size (in this case, 384×512×3) and data class (in this case, uint8), and display an RGB color image (Figure 16.14).

I = imread('peppers.png');
size(I)
class(I)
subplot(2,2,1), imshow(I), title('Color image (RGB)')
subplot(2,2,2), imshow(I(:,:,1)), title('Red component')
subplot(2,2,3), imshow(I(:,:,2)), title('Green component')
subplot(2,2,4), imshow(I(:,:,3)), title('Blue component')

FIGURE 16.14 RGB image and its three color components (or channels). Original image: courtesy of MathWorks.

16.3.2 Indexed Images

An indexed image is a matrix of integers (X), where each integer refers to a particular row of RGB values in a secondary matrix (map) known as a color map. The image can be represented by an array of class uint8, uint16, or double. The color map array is an M×3 matrix of class double, where each element's value is within the range [0.0, 1.0]. Each row in the color map represents the R (red), G (green), and B (blue) values, in that order. The indexing mechanism works as follows (Figure 16.15): if X is of class uint8 or uint16, all components with value 0 point to the first row in map, all components with value 1 point to the second row, and so on. If X is of class double, all components with value less than or equal to 1.0 point to the first row, and so on.

FIGURE 16.15 Indexed color image representation.

MATLAB has many built-in color maps (which can be accessed using the function colormap), briefly described in Table 16.4. In addition, you can easily create your own color map by defining an array of class double and size M×3, where each element is a floating-point value in the range [0.0, 1.0].

TABLE 16.4 Color Maps in MATLAB

Name | Description
hsv | Hue-saturation-value color map
hot | Black-red-yellow-white color map
gray | Linear gray-scale color map
bone | Gray scale with tinge of blue color map
copper | Linear copper-tone color map
pink | Pastel shades of pink color map
white | All white color map
flag | Alternating red, white, blue, and black color map
lines | Color map with the line colors
colorcube | Enhanced color-cube color map
vga | Windows color map for 16 colors
jet | Variant of HSV
prism | Prism color map
cool | Shades of cyan and magenta color map
autumn | Shades of red and yellow color map
spring | Shades of magenta and yellow color map
winter | Shades of blue and green color map
summer | Shades of green and yellow color map

EXAMPLE 16.2

The following MATLAB sequence can be used to load a built-in indexed image, verify its size (in this case, 200×300) and data class (in this case, double), verify the color map's size (in this case, 81×3) and data class (in this case, double), and display the image (Figure 16.16).

load clown
size(X)
class(X)
size(map)
class(map)
imshow(X,map), title('Color (Indexed)')

FIGURE 16.16 A built-in indexed image. Original image: courtesy of MathWorks.

MATLAB has many useful functions for manipulating indexed color images:

• If we need to approximate an indexed image by one with fewer colors, we can use MATLAB's imapprox function.
• Conversion between RGB and indexed color images is straightforward, thanks to the functions rgb2ind and ind2rgb.
• Conversion from either color format to its grayscale equivalent is equally easy, using the functions rgb2gray and ind2gray.
• We can also create an indexed image from an RGB image by dithering the original image using the function dither.
• The function grayslice creates an indexed image from an intensity (grayscale) image by thresholding and can be used in pseudocolor image processing (Section 16.4).
• The function gray2ind converts an intensity (grayscale) image into its indexed image equivalent. It is different from grayslice. In this case, the resulting image is monochrome, just as the original one;⁷ only the internal data representation has changed.

⁷ The only possible differences would have resulted from the need to requantize the number of gray levels if the specified color map is smaller than the original number of gray levels.

16.4 PSEUDOCOLOR IMAGE PROCESSING

The purpose of pseudocolor image processing techniques is to enhance a monochrome image for human viewing purposes. Their rationale is that subtle variations of gray levels may very often mask or hide regions of interest within an image. This can be particularly damaging if the masked region is relevant to the application domain (e.g., the presence of a tumor in a medical image). Since the human eye is capable of discerning thousands of color hues and intensities, compared to fewer than 100 shades of gray, replacing gray levels with colors leads to better visualization and an enhanced capability for detecting relevant details within the image. The typical solution consists of using a color lookup table (LUT) designed to map the entire range of (typically 256) gray levels to a (usually much smaller) number of colors. For better results, contrasting colors should appear in consecutive rows in the LUT. The term pseudocolor is used to emphasize the fact that the assigned colors usually have no correspondence whatsoever with the true colors that might have been present in the original image.

16.4.1 Intensity Slicing

The technique of intensity (or density) slicing is the simplest and best-known pseudocoloring technique. If we look at a monochrome image as if it were a 3D plot of gray levels versus spatial coordinates, where the most prominent peaks correspond to the brightest pixels, the technique corresponds to placing several planes parallel to the coordinate plane of the image (also known as the xy plane). Each plane "slices" the 3D function in the area of intersection, resulting in several gray-level intervals. Each side of the plane is then assigned a different color. Figure 16.17 shows an example of intensity slicing using only one slicing plane at f(x, y) = lᵢ, and Figure 16.18 shows an alternative representation, where the chosen colors are indicated as c₁, c₂, c₃, and c₄. The idea can easily be extended to M planes and M+1 intervals.

In MATLAB

Intensity slicing can be accomplished using the grayslice function. You will learn how to use this function in Tutorial 16.1.
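A compact end-to-end illustration of intensity slicing with grayslice (pout.tif is a grayscale demo image shipped with the IPT; the 16-level jet map is an arbitrary choice):

```matlab
% Pseudocoloring a grayscale image by intensity slicing.
I = imread('pout.tif');     % grayscale demo image from the IPT
X = grayslice(I, 16);       % slice the gray range into 16 intervals
imshow(X, jet(16))          % display with a 16-row contrasting color map
```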

Digital Signal Processing


Digital Signal Processing (DSP) is a crucial aspect of modern technology, playing a significant role in various fields such as telecommunications, audio processing, image processing, and more. It involves manipulating digital signals to improve their quality, extract useful information, or compress data for efficient storage and transmission. DSP algorithms are used to process signals in real time, making it an essential tool in many applications.

One of the key advantages of DSP is its ability to perform complex mathematical operations on signals quickly and accurately. This allows for real-time processing of signals, making it ideal for applications where speed is essential, such as in audio and video streaming. DSP algorithms can be optimized to run efficiently on specialized hardware, further improving their performance and reducing latency.

In the field of telecommunications, DSP is used in a wide range of applications, including modulating and demodulating signals, error correction, and filtering out noise. By processing signals digitally, telecommunications systems can achieve higher data rates, better signal quality, and improved reliability. DSP also plays a crucial role in modern wireless communication systems, where it is used for channel equalization, interference cancellation, and beamforming.

In audio processing, DSP is used to enhance sound quality, remove noise, and implement various audio effects such as equalization, compression, and reverberation. DSP algorithms can analyze audio signals in real time, allowing for adaptive filtering and dynamic adjustments to optimize sound output. This technology is used in a wide range of audio devices, including smartphones, music players, and home theater systems.

Image processing is another field where DSP plays a vital role, enabling tasks such as image enhancement, object recognition, and image compression. DSP algorithms can analyze digital images pixel by pixel, performing operations such as filtering, edge detection, and color correction. This technology is used in applications ranging from medical imaging and satellite imagery to security surveillance and digital photography.

While DSP offers numerous benefits, it also presents challenges such as algorithm complexity, computational requirements, and trade-offs between accuracy and speed. Designing efficient DSP algorithms requires a deep understanding of signal processing theory, mathematical optimization, and computational techniques. Engineers must carefully balance these factors to develop algorithms that meet the performance requirements of specific applications.

In conclusion, digital signal processing is a versatile and powerful technology that has revolutionized various fields, from telecommunications to audio and image processing. Its ability to process signals in real time, optimize performance, and improve signal quality makes it an essential tool in modern technology. Despite the challenges it presents, DSP continues to drive innovation and advancements in a wide range of applications, shaping the way we communicate, listen to music, and interact with digital media.
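To ground the essay's recurring example of real-time filtering, here is a minimal streaming-style FIR demonstration in MATLAB (a 5-tap moving average, one of the simplest noise-reduction DSP algorithms):

```matlab
% A minimal FIR example: moving-average noise reduction on a test signal.
n = 0:199;
x = sin(2*pi*n/50) + 0.3*randn(1, 200);  % noisy input signal
b = ones(1, 5) / 5;                      % 5-tap moving-average FIR
y = filter(b, 1, x);                     % causal (streaming) filtering
plot(n, x, n, y), legend('noisy', 'filtered')
```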


Proceedings of the 14th International Conference on Electronics, Communications and Computers (CONIELECOMP'04), © 2004 IEEE

Abstract

Nonlinear filtering techniques are presented to suppress impulsive noise while preserving fine image details. Directional processing, the non-parametric approach, and order-statistics filters have been investigated. The previously presented novel MM-KNN algorithm is adapted to color imaging. Different criteria have been used to characterize performance in color imaging, such as impulsive noise suppression and edge and detail preservation. The analyzed algorithms have been implemented on a Texas Instruments DSP to obtain processing-time values when impulsive noise of different intensities corrupts the color image.

1. Introduction

One of the first approaches in color imaging was directional processing. These filters use the characteristics that identify the unique vector representing a color pixel: direction and magnitude. The filters order the vectors obtained during the calculation process according to the direction of each vector. The processing procedure is separated into "directional processing" and "magnitude processing": vector directional filters process the vector directions and eliminate the vectors with atypical directions, using the angle between vectors as the ordering criterion. The remaining set is a group of vectors with approximately the same direction in vector space. A magnitude processing step is then carried out on this group of vectors to obtain a single vector that represents the algorithm output. Such magnitude processing may be carried out with any gray-level image processing filter [1], [2].

Another scheme presented here is the filtering procedure based on non-parametric approaches. It applies to filters that use the vector approximation, transferring the color image to a vector set. Such a set contains each vector's direction and magnitude, which are related to the pixel's chromaticity properties. The unknown noise density functional is estimated from the available sample outliers using non-parametric techniques [3]. An important property of order statistics is robust parameter estimation, the median being one of the best examples of a robust estimator. Different filters have been investigated and implemented in this paper: the median filter [4], the α-trimmed mean filter [4], the vector median filter [4], and the MM-KNN (Median M-Type K-Nearest Neighbor) filter [5], adapted to color imaging.

2. Filtering Schemes

a) Vector Directional Filters (VDF). The Basic Vector Directional Filter (BVDF) treats the pixels as vectors, realizes the directional processing, and outputs the vector whose angles show the least deviation, under the ordering criterion, with respect to the other vectors. For a vector set $\{f_i,\ i = 1, 2, \ldots, n\}$, the filtering procedure gives

$$f_{BD} = \mathrm{BVDF}[f_1, f_2, \ldots, f_n], \tag{1}$$

where the ordering is realized in the following way:

$$\sum_{i=1}^{n} A(f_{BD}, f_i) \le \sum_{i=1}^{n} A(f_j, f_i), \qquad j = 1, 2, \ldots, n. \tag{2}$$

Here $A(f_{BD}, f_i)$ is the minimum angle value obtained between the vectors $f_i$ and $f_{BD}$; the output vector is calculated by realizing a magnitude processing step, which is carried out by any gray-level image processing filter. The Generalized Vector Directional Filter (GVDF) outputs, instead of a single vector, the set of vectors whose directions pass the ordering criterion:

$$f_O = \{f_{(1)}, f_{(2)}, \ldots, f_{(k)}\} = \{\mathrm{GVDF}[f_1, f_2, \ldots, f_n]\}. \tag{3}$$

The Generalized Vector Directional Filter Adaptive (GVDFAD) is an adaptive filter that uses the derivative of the ordering sequence D(i), which corresponds to the ordering criterion. In this case the angle between different vectors is usually used, and a special sequence-cutting procedure is applied at the first discontinuity that exceeds some value W% of the maximum derivative of the sequence; usually W = 25 [2]. The GVDF filter has to be combined with appropriate magnitude filters to produce a single output vector for each pixel. The magnitude filters implemented here with GVDF were the median filter and the α-trimmed mean filter.

b) The adaptive non-parametric approach filters determine the unknown form of the density from the set of vectors inside the sliding window. Below we present some of these filters. One is the Adaptive Multichannel Non-parametric Filter (AMNF). Another filter is AMNF2, in which the Median Vector Filter (MVF) gives the reference vectors $y_l^{VM}$ needed to realize the filter:

$$\hat{x}_{\mathrm{AMNF2}}(y) = \sum_{l=1}^{n} x_l \,\frac{h_l^{M} K\!\left((y - y_l^{VM})/h_l\right)}{\sum_{l=1}^{n} h_l^{M} K\!\left((y - y_l^{VM})/h_l\right)}. \tag{6}$$
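To make the order-statistics branch concrete, the sketch below implements a minimal 3x3 vector median filter in the spirit of the VMF of [4]; the file name is a placeholder, implicit expansion needs MATLAB R2016b+, and this is an illustration rather than the paper's DSP implementation:

```matlab
% Minimal 3x3 vector median filter for an RGB image - illustrative sketch.
I = im2double(imread('noisy.png'));      % placeholder noisy color image
[M, N, ~] = size(I);
J = I;
for r = 2:M-1
    for c = 2:N-1
        W = reshape(I(r-1:r+1, c-1:c+1, :), 9, 3);   % the 9 window vectors
        D = sqrt(sum((permute(W,[1 3 2]) - permute(W,[3 1 2])).^2, 3));
        [~, k] = min(sum(D, 2));   % vector minimizing total distance to others
        J(r, c, :) = W(k, :);      % the vector median of the window
    end
end
imshow(J)
```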