Effective Image Haze Removal Using Dark Channel


Single Image Haze Removal

Single Image Haze Removal Using Dark Channel Prior

Kaiming He, Jian Sun, and Xiaoou Tang, Fellow, IEEE

Abstract—In this paper, we propose a simple but effective image prior—dark channel prior—to remove haze from a single input image. The dark channel prior is a kind of statistics of outdoor haze-free images. It is based on a key observation—most local patches in outdoor haze-free images contain some pixels whose intensity is very low in at least one color channel. Using this prior with the haze imaging model, we can directly estimate the thickness of the haze and recover a high-quality haze-free image. Results on a variety of hazy images demonstrate the power of the proposed prior. Moreover, a high-quality depth map can also be obtained as a byproduct of haze removal.

Index Terms—Dehaze, defog, image restoration, depth estimation.

1 INTRODUCTION

Images of outdoor scenes are usually degraded by the turbid medium (e.g., particles and water droplets) in the atmosphere. Haze, fog, and smoke are such phenomena due to atmospheric absorption and scattering. The irradiance received by the camera from the scene point is attenuated along the line of sight. Furthermore, the incoming light is blended with the airlight [1]—ambient light reflected into the line of sight by atmospheric particles. The degraded images lose contrast and color fidelity, as shown in Fig. 1a. Since the amount of scattering depends on the distance of the scene points from the camera, the degradation is spatially variant.

Haze removal (or dehazing) is highly desired in consumer/computational photography and computer vision applications. (Haze, fog, and smoke differ mainly in the material, size, shape, and concentration of the atmospheric particles; see [2] for more details. In this paper, we do not distinguish these similar phenomena and use the term haze removal for simplicity.) First, removing haze can significantly increase the visibility of the scene and correct the color shift caused by the airlight. In general, the haze-free image is more visually pleasing. Second, most computer vision algorithms, from low-level image analysis to high-level object recognition, usually assume that the input image (after radiometric calibration) is the scene radiance. The performance of many vision algorithms (e.g., feature detection, filtering, and photometric analysis) will inevitably suffer from the biased and low-contrast scene radiance. Last, haze removal can provide depth information and benefit many vision algorithms and advanced image editing. Haze or fog can be a useful depth cue for scene understanding. A bad hazy image can be put to good use.

However, haze removal is a challenging problem because the haze is dependent on the unknown depth. The problem is underconstrained if the input is only a single hazy image. Therefore, many methods have been proposed by using multiple images or additional information. Polarization-based methods [3], [4] remove the haze effect through two or more images taken with different degrees of polarization. In [5], [6], [7], more constraints are obtained from multiple images of the same scene under different weather conditions. Depth-based methods [8], [9] require some depth information from user inputs or known 3D models.

Recently, single image haze removal has made significant progress [10], [11]. The success of these methods lies in using stronger priors or assumptions. Tan [11] observes that a haze-free image must have higher contrast compared with the input hazy image, and he removes haze by maximizing the local contrast of the restored image. The results are visually compelling but may not be physically valid.
Fattal [10] estimates the albedo of the scene and the medium transmission under the assumption that the transmission and the surface shading are locally uncorrelated. This approach is physically sound and can produce impressive results. However, it cannot handle heavily hazy images well and may fail in the cases where the assumption is broken.

In this paper, we propose a novel prior—dark channel prior—for single image haze removal. The dark channel prior is based on the statistics of outdoor haze-free images. We find that, in most of the local regions which do not cover the sky, some pixels (called dark pixels) very often have very low intensity in at least one color (RGB) channel. In hazy images, the intensity of these dark pixels in that channel is mainly contributed by the airlight. Therefore, these dark pixels can directly provide an accurate estimation of the haze transmission. Combining a haze imaging model and a soft matting interpolation method, we can recover a high-quality haze-free image and produce a good depth map. Our approach is physically valid and is able to handle distant objects in heavily hazy images. We do not rely on significant variance of transmission or surface shading. The result contains few halo artifacts.

Like any approach using a strong assumption, our approach also has its own limitation. The dark channel prior may be invalid when the scene object is inherently similar to the airlight (e.g., snowy ground or a white wall) over a large local region and no shadow is cast on it.
Although our approach works well for most outdoor hazy images, it may fail on some extreme cases. Fortunately, in such situations haze removal is not critical since haze is rarely visible. We believe that developing novel priors from different directions and combining them together will further advance the state of the art.

2 BACKGROUND

In computer vision and computer graphics, the model widely used to describe the formation of a hazy image is [2], [5], [10], [11]:

I(x) = J(x) t(x) + A (1 - t(x)),    (1)

where I is the observed intensity, J is the scene radiance, A is the global atmospheric light, and t is the medium transmission describing the portion of the light that is not scattered and reaches the camera. The goal of haze removal is to recover J, A, and t from I. For an N-pixel color image I, there are 3N constraints and 4N+3 unknowns. This makes the problem of haze removal inherently ambiguous.

In (1), the first term J(x)t(x) on the right-hand side is called direct attenuation [11], and the second term A(1 - t(x)) is called airlight [1], [11]. The direct attenuation describes the scene radiance and its decay in the medium, and the airlight results from previously scattered light and leads to the shift of the scene colors. While the direct attenuation is a multiplicative distortion of the scene radiance, the airlight is an additive one.

When the atmosphere is homogeneous, the transmission t can be expressed as

t(x) = e^{-\beta d(x)},    (2)

where \beta is the scattering coefficient of the atmosphere and d is the scene depth. This equation indicates that the scene radiance is attenuated exponentially with the depth. If we can recover the transmission, we can also recover the depth up to an unknown scale.

Geometrically, the haze imaging equation (1) means that in RGB color space, the vectors A, I(x), and J(x) are coplanar and their end points are collinear (see Fig. 2a). The transmission t is the ratio of two line segments:

t(x) = \frac{\|A - I(x)\|}{\|A - J(x)\|} = \frac{A^c - I^c(x)}{A^c - J^c(x)},    (3)

where c \in \{r, g, b\} is the color channel index.

Fig. 2. (a) Haze imaging model. (b) Constant albedo model used in Fattal's work [10]. Fig. 1. Haze removal using a single image. (a) Input hazy image. (b) Image after haze removal by our approach. (c) Our recovered depth map.

Based on this model, Tan's method [11] focuses on enhancing the visibility of the image. For a patch with uniform transmission t, the visibility (sum of gradient) of the input image is reduced by the haze since t < 1:

\sum_x \|\nabla I(x)\| = t \sum_x \|\nabla J(x)\| < \sum_x \|\nabla J(x)\|.    (4)

The transmission t in a local patch is estimated by maximizing the visibility of the patch under a constraint that the intensity of J(x) is less than the intensity of A. An MRF model is used to further regularize the result. This approach is able to greatly unveil details and structures from hazy images. However, the output images usually tend to have larger saturation values because this method focuses solely on the enhancement of the visibility and does not intend to physically recover the scene radiance. Besides, the result may contain halo effects near the depth discontinuities.

In [10], Fattal proposes an approach based on Independent Component Analysis (ICA). First, the albedo of a local patch is assumed to be a constant vector R. Thus, all vectors J(x) in the patch have the same direction R, as shown in Fig. 2b. Second, by assuming that the statistics of the surface shading \|J(x)\| and the transmission t(x) are independent in the patch, the direction of R is estimated by ICA. Finally, an MRF model guided by the input color image is applied to extrapolate the solution to the whole image.
This approach is physics-based and can produce a natural haze-free image together with a good depth map. But, due to the statistical independence assumption, this approach requires that the independent components vary significantly. Any lack of variation or low signal-to-noise ratio (often in dense haze regions) will make the statistics unreliable. Moreover, as the statistic is based on color information, it is invalid for gray-scale images, and it is difficult to handle dense haze that is colorless.

In the next section, we present a new prior—dark channel prior—to estimate the transmission directly from an outdoor hazy image.

3 DARK CHANNEL PRIOR

The dark channel prior is based on the following observation on outdoor haze-free images: In most of the nonsky patches, at least one color channel has some pixels whose intensity is very low and close to zero. Equivalently, the minimum intensity in such a patch is close to zero.

To formally describe this observation, we first define the concept of a dark channel. For an arbitrary image J, its dark channel J^{dark} is given by

J^{dark}(x) = \min_{y \in \Omega(x)} \left( \min_{c \in \{r,g,b\}} J^c(y) \right),    (5)

where J^c is a color channel of J and \Omega(x) is a local patch centered at x. A dark channel is the outcome of two minimum operators: \min_c is performed on each pixel (Fig. 3b), and \min_{y \in \Omega(x)} is a minimum filter (Fig. 3c). The minimum operators are commutative.

Using the concept of a dark channel, our observation says that if J is an outdoor haze-free image, except for the sky region, the intensity of J's dark channel is low and tends to be zero:

J^{dark} \to 0.    (6)

We call this observation the dark channel prior.

The low intensity in the dark channel is mainly due to three factors: a) shadows, e.g., the shadows of cars, buildings, and the inside of windows in cityscape images, or the shadows of leaves, trees, and rocks in landscape images; b) colorful objects or surfaces, e.g., any object with low reflectance in any color channel (for example, green grass/tree/plant, red or yellow flower/leaf, and blue water surface) will result in low values in the dark channel; c) dark objects or surfaces, e.g., dark tree trunks and stones. As the natural outdoor images are usually colorful and full of shadows, the dark channels of these images are really dark!

To verify how good the dark channel prior is, we collect an outdoor image set from Flickr.com and several other image search engines using the 150 most popular tags annotated by the Flickr users. Since haze usually occurs in outdoor landscape and cityscape scenes, we manually pick out the haze-free landscape and cityscape ones from the data set. Besides, we only focus on daytime images. Among them, we randomly select 5,000 images and manually cut out the sky regions. The images are resized so that the maximum of width and height is 500 pixels, and their dark channels are computed using a patch size of 15x15. Fig. 4 shows several outdoor images and the corresponding dark channels.

Fig. 3. Calculation of a dark channel. (a) An arbitrary image J. (b) For each pixel, we calculate the minimum of its (r, g, b) values. (c) A minimum filter is performed on (b). This is the dark channel of J. The image size is 800x551, and the patch size of \Omega is 15x15.

Fig. 5a is the intensity histogram over all 5,000 dark channels and Fig. 5b is the corresponding cumulative distribution. We can see that about 75 percent of the pixels in the dark channels have zero values, and the intensity of 90 percent of the pixels is below 25. This statistic gives very strong support to our dark channel prior.
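The two nested minima in (5) translate directly into code. A minimal sketch in Python, assuming NumPy and SciPy and a float RGB image with values in [0, 1]; the 15x15 patch size follows the paper's statistics:

```python
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    # Eq. (5): min over a patch of the per-pixel minimum over (r, g, b).
    per_pixel_min = img.min(axis=2)                   # min_c, as in Fig. 3b
    return minimum_filter(per_pixel_min, size=patch)  # minimum filter, Fig. 3c
```

Because the two minimum operators commute, applying the per-pixel channel minimum first and the spatial minimum filter second gives the same result as the reverse order.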
We also compute the average intensity of each dark channel and plot the corresponding histogram in Fig. 5c. Again, most dark channels have very low average intensity, showing that only a small portion of outdoor haze-free images deviate from our prior.

Due to the additive airlight, a hazy image is brighter than its haze-free version where the transmission t is low. So, the dark channel of a hazy image will have higher intensity in regions with denser haze (see the right-hand side of Fig. 4). Visually, the intensity of the dark channel is a rough approximation of the thickness of the haze. In the next section, we will use this property to estimate the transmission and the atmospheric light.

Note that we neglect the sky regions because the dark channel of a haze-free image may have high intensity there. Fortunately, we can gracefully handle the sky regions by using the haze imaging model (1) and our prior together. It is not necessary to cut out the sky regions explicitly. We discuss this issue in Section 4.1.

Our dark channel prior is partially inspired by the well-known dark-object subtraction technique [12] widely used in multispectral remote sensing systems. In [12], spatially homogeneous haze is removed by subtracting a constant value corresponding to the darkest object in the scene. We generalize this idea and propose a novel prior for natural image dehazing.

Fig. 4. (a) Example images in our haze-free image database. (b) The corresponding dark channels. (c) A hazy image and its dark channel. Fig. 5. Statistics of the dark channels. (a) Histogram of the intensity of the pixels in all of the 5,000 dark channels (each bin stands for 16 intensity levels). (b) Cumulative distribution. (c) Histogram of the average intensity of each dark channel.

4 HAZE REMOVAL USING DARK CHANNEL PRIOR

4.1 Estimating the Transmission

We assume that the atmospheric light A is given. An automatic method to estimate A is proposed in Section 4.3. We first normalize the haze imaging equation (1) by A:

\frac{I^c(x)}{A^c} = t(x) \frac{J^c(x)}{A^c} + 1 - t(x).    (7)

Note that we normalize each color channel independently. We further assume that the transmission in a local patch \Omega(x) is constant. We denote this transmission as \tilde{t}(x). Then, we calculate the dark channel on both sides of (7).
Equivalently, we put the minimum operators on both sides:

\min_{y \in \Omega(x)} \left( \min_c \frac{I^c(y)}{A^c} \right) = \tilde{t}(x) \min_{y \in \Omega(x)} \left( \min_c \frac{J^c(y)}{A^c} \right) + 1 - \tilde{t}(x).    (8)

Since \tilde{t}(x) is a constant in the patch, it can be put on the outside of the min operators. As the scene radiance J is a haze-free image, the dark channel of J is close to zero due to the dark channel prior:

J^{dark}(x) = \min_{y \in \Omega(x)} \left( \min_c J^c(y) \right) = 0.    (9)

As A^c is always positive, this leads to

\min_{y \in \Omega(x)} \left( \min_c \frac{J^c(y)}{A^c} \right) = 0.    (10)

Putting (10) into (8), we can eliminate the multiplicative term and estimate the transmission \tilde{t} simply by

\tilde{t}(x) = 1 - \min_{y \in \Omega(x)} \left( \min_c \frac{I^c(y)}{A^c} \right).    (11)

In fact, \min_{y \in \Omega(x)} ( \min_c I^c(y)/A^c ) is the dark channel of the normalized hazy image I^c(y)/A^c. It directly provides the estimation of the transmission.

As we mentioned before, the dark channel prior is not a good prior for the sky regions. Fortunately, the color of the sky in a hazy image I is usually very similar to the atmospheric light A. So, in the sky region, we have

\min_{y \in \Omega(x)} \left( \min_c \frac{I^c(y)}{A^c} \right) \to 1,

and (11) gives \tilde{t}(x) \to 0. Since the sky is infinitely distant and its transmission is indeed close to zero (see (2)), (11) gracefully handles both sky and nonsky regions. We do not need to separate the sky regions beforehand.

In practice, even on clear days the atmosphere is not absolutely free of any particle. So, the haze still exists when we look at distant objects. Moreover, the presence of haze is a fundamental cue for humans to perceive depth [13], [14]. This phenomenon is called aerial perspective. If we remove the haze thoroughly, the image may seem unnatural and we may lose the feeling of depth. So, we can optionally keep a very small amount of haze for the distant objects by introducing a constant parameter \omega (0 < \omega \le 1) into (11):

\tilde{t}(x) = 1 - \omega \min_{y \in \Omega(x)} \left( \min_c \frac{I^c(y)}{A^c} \right).    (12)

The nice property of this modification is that we adaptively keep more haze for the distant objects. The value of \omega is application-based. We fix it to 0.95 for all results reported in this paper.

In the derivation of (11), the dark channel prior is essential for eliminating the multiplicative term (direct transmission) in the haze imaging model (1). Only the additive term (airlight) is left. This strategy is completely different from previous single image haze removal methods [10], [11], which rely heavily on the multiplicative term.
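Equation (12) is likewise a one-liner once the dark channel operator is available. A hedged sketch reusing the minimum filter above; A is assumed to be a 3-vector of per-channel atmospheric light:

```python
from scipy.ndimage import minimum_filter

def estimate_transmission(img, A, patch=15, omega=0.95):
    # Eq. (12): 1 - omega * dark_channel(I / A); omega < 1 keeps a little
    # haze for distant objects (aerial perspective).
    normalized = img / A.reshape(1, 1, 3)   # per-channel I^c / A^c
    dark = minimum_filter(normalized.min(axis=2), size=patch)
    return 1.0 - omega * dark
```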
These methods are driven by the fact that the multiplicative term changes the image contrast [11] and the color variance [10]. On the contrary, we notice that the additive term changes the intensity of the local dark pixels. With the help of the dark channel prior, the multiplicative term is discarded and the additive term is sufficient to estimate the transmission.

We can further generalize (1) by

I(x) = J(x) t_1(x) + A (1 - t_2(x)),    (13)

where t_1 and t_2 are not necessarily the same. Using the method for deriving (11), we can estimate t_2 and thus separate the additive term. The problem is reduced to a multiplicative form (J(x) t_1), and other constraints or priors can be used to further disentangle this term. In the literature of human vision research [15], the additive term is called a veiling luminance, and (13) can be used to describe a scene seen through a veil or glare of highlights.

Fig. 6b shows the estimated transmission maps using (12). Fig. 6d shows the corresponding recovered images. As we can see, the dark channel prior is effective in recovering the vivid colors and unveiling low-contrast objects. The transmission maps are reasonable. The main problems are some halos and block artifacts. This is because the transmission is not always constant in a patch. In the next section, we propose a soft matting method to refine the transmission maps.

4.2 Soft Matting

We notice that the haze imaging equation (1) has a similar form as the image matting equation:

I = F \alpha + B (1 - \alpha),    (14)

where F and B are foreground and background colors, respectively, and \alpha is the foreground opacity. A transmission map in the haze imaging equation is exactly an alpha map. Therefore, we can apply a closed-form framework of matting [16] to refine the transmission.

Denote the refined transmission map by t(x). Rewriting t(x) and \tilde{t}(x) in their vector forms as t and \tilde{t}, we minimize the following cost function:

E(t) = t^T L t + \lambda (t - \tilde{t})^T (t - \tilde{t}).    (15)

Here, the first term is a smoothness term and the second term is a data term with a weight \lambda. The matrix L is called the matting Laplacian matrix [16].
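In practice, once L is built, the refinement reduces to one sparse solve. The sketch below assumes the matting Laplacian L has already been constructed following Levin et al. [16] (the helper that builds it is not shown) and uses conjugate gradients, in the spirit of the PCG solver the paper mentions in Section 5:

```python
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def refine_transmission(L, t_tilde, lam=1e-4):
    # Eq. (17): (L + lam * U) t = lam * t_tilde, with U the identity and
    # lam small (1e-4 in the paper) so t is only softly constrained.
    n = t_tilde.size
    system = L + lam * sp.identity(n, format="csr")
    t, _ = cg(system, lam * t_tilde.ravel(), maxiter=2000)
    return t.reshape(t_tilde.shape)
```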
application,the coarse constraint ~t has already filled the whole image.We consider this constraint as soft,and use the matting framework to refine the map.After solving the linear system (17),we perform a bilateral filter [19]on t to smooth its small scale textures.Fig.6c shows the refined results using Fig.6b as theconstraint.As we can see,the halos and block artifacts are suppressed.The refined transmission maps manage to capture the sharp edge discontinuities and outline the profile of the objects.4.3Estimating the Atmospheric LightWe have been assuming that the atmospheric light A is known.In this section,we propose a method to estimate A .In the previous works,the color of the most haze-opaque region is used as A [11]or as A ’s initial guess [10].However,little attention has been paid to the detection of the “most haze-opaque”region.In Tan’s work [11],the brightest pixels in the hazy image are considered to be the most haze-opaque.This is true only when the weather is overcast and the sunlight can be ignored.In this case,the atmospheric light is the only illumination source of the scene.So,the scene radiance of each color channel is given byJ ðx Þ¼R ðx ÞA;ð18Þwhere R 1is the reflectance of the scene points.The haze imaging equation (1)can be written asI ðx Þ¼R ðx ÞAt ðx Þþð1Àt ðx ÞÞA A:ð19ÞWhen pixels at infinite distance (t %0)exist in the image,the brightest I is the most haze-opaque and it approxi-mately equals A .Unfortunately,in practice we can rarely ignore the sunlight.Considering the sunlight S ,we modify (18)byJ ðx Þ¼R ðx ÞðS þA Þ;ð20Þand (19)byI ðx Þ¼R ðx ÞSt ðx ÞþR ðx ÞAt ðx Þþð1Àt ðx ÞÞA:ð21Þ2346IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE,VOL.33,NO.12,DECEMBER2011Fig.6.Haze removal.(a)Input hazy images.(b)Estimated transmission maps before soft matting.(c)Refined transmission maps after soft matting.(d),(e)Recovered images using (b)and (c),respectively.In this situation,the brightest pixel of the whole image can be brighter than the atmospheric light.They can be on a white car or a white building (Figs.7d and 7e).As discussed in Section 3,the dark channel of a hazy image approximates the haze denseness (see Fig.7b).So we can use the dark channel to detect the most haze-opaque region and improve the atmospheric light estimation.We first pick the top 0.1percent brightest pixels in the dark channel.These pixels are usually most haze-opaque (bounded by yellow lines in Fig.7b).Among these pixels,the pixels with highest intensity in the input image I are selected as the atmospheric light.These pixels are in the red rectangle in Fig.7a.Note that these pixels may not be brightest ones in the whole input image.This method works well even when pixels at infinite distance do not exist in the image.In Fig.8b,our method manages to detect the most haze-opaque regions.However,t is not close to zero here,so the colors of these regions may be different from A .Fortunately,t is small in these most haze-opaque regions,so the influence of sunlight is weak (see (21)).Therefore,these regions can still provide a good approximation of A .The haze removal result of this image is shown in Fig.8c.This simple method based on the dark channel prior is more robust than the “brightest pixel”method.We use it to automatically estimate the atmospheric lights for all images shown in this paper.HE ETAL.:SINGLE IMAGE HAZE REMOVAL USING DARK CHANNEL PRIOR 2347Fig.7.Estimating the atmospheric light.(a)Input image.(b)Dark channel and the most haze-opaque region.(c)The patch from where our 
Fig. 7. Estimating the atmospheric light. (a) Input image. (b) Dark channel and the most haze-opaque region. (c) The patch from where our method automatically obtains the atmospheric light. (d), (e) Two patches that contain pixels brighter than the atmospheric light. Fig. 8. (a) Input image. (b) Dark channel. The red pixels are the most haze-opaque regions detected by our method. (c) Our haze removal result. (d) Fattal's haze removal result [10].

4.4 Recovering the Scene Radiance

With the atmospheric light and the transmission map, we can recover the scene radiance according to (1). But the direct attenuation term J(x)t(x) can be very close to zero when the transmission t(x) is close to zero. The directly recovered scene radiance J is prone to noise. Therefore, we restrict the transmission t(x) by a lower bound t_0, i.e., we preserve a small amount of haze in very dense haze regions. The final scene radiance J(x) is recovered by

J(x) = \frac{I(x) - A}{\max(t(x), t_0)} + A.    (22)

A typical value of t_0 is 0.1. Since the scene radiance is usually not as bright as the atmospheric light, the image after haze removal looks dim. So, we increase the exposure of J(x) for display. Some final recovered images are shown in Fig. 6e.

4.5 Patch Size

A key parameter in our algorithm is the patch size in (11). On one hand, the dark channel prior becomes better for a larger patch size because the probability that a patch contains a dark pixel is increased. We can see this in Fig. 9: the larger the patch size, the darker the dark channel. Consequently, (9) is less accurate for a small patch, and the recovered scene radiance is oversaturated (Fig. 10b). On the other hand, the assumption that the transmission is constant in a patch becomes less appropriate. If the patch size is too large, halos near depth edges may become stronger (Fig. 10c).

Fig. 11 shows the haze removal results using different patch sizes. The image sizes are 600x400. In Fig. 11b, the patch size is 3x3. The colors of some grayish surfaces look oversaturated (see the buildings in the first row and the subimages in the second and the third rows). In Figs. 11c and 11d, the patch sizes are 15x15 and 30x30, respectively. The results appear more natural than those in Fig. 11b. This shows that our method works well for sufficiently large patch sizes. The soft matting technique is able to reduce the artifacts introduced by large patches. We also notice that the images in Fig. 11d seem to be slightly hazier than those in Fig. 11c (typically in distant regions), but the differences are small. In the remainder of this paper, we use a patch size of 15x15.

5 EXPERIMENTAL RESULTS

In our experiments, we compute the minimum filter by van Herk's fast algorithm [20], whose complexity is linear in the image size. In the soft matting step, we use the Preconditioned Conjugate Gradient (PCG) algorithm as our solver. It takes about 10-20 seconds to process a 600x400 image on a PC with a 3.0 GHz Intel Pentium 4 processor.

Figs. 1 and 12 show the recovered images and the depth maps. The depth maps are computed according to (2) and are up to an unknown scaling parameter \beta. The atmospheric lights in these images are automatically estimated using the method described in Section 4.3 (indicated by the red rectangles in Fig. 12). As can be seen, our approach can unveil the details and recover vivid colors even in heavily hazy regions.
Fig. 9. A haze-free image (600x400) and its dark channels using 3x3 and 15x15 patches, respectively. Fig. 10. (a) A 600x400 hazy image and the recovered scene radiance using (b) 3x3 and (c) 15x15 patches, respectively (without soft matting). The recovered scene radiance is oversaturated for a small patch size, while it contains apparent halos for a large patch size.
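Putting the pieces together, a minimal end-to-end sketch of the pipeline, reusing the helper functions sketched earlier; the soft matting refinement of Section 4.2 is omitted, and the final clip to [0, 1] stands in for the exposure increase the authors apply for display:

```python
import numpy as np

def recover_radiance(img, t, A, t0=0.1):
    # Eq. (22): clamp t from below by t0 to avoid amplifying noise where
    # the transmission is near zero.
    t_clamped = np.maximum(t, t0)[..., None]
    return (img - A.reshape(1, 1, 3)) / t_clamped + A.reshape(1, 1, 3)

def dehaze(img):
    dark = dark_channel(img)                    # Eq. (5)
    A = estimate_atmospheric_light(img, dark)   # Section 4.3
    t = estimate_transmission(img, A)           # Eq. (12), unrefined
    return np.clip(recover_radiance(img, t, A), 0.0, 1.0)
```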

Fingerprint Image Denoising Method Based on an Improved Threshold Function

Fingerprint Image Denoising Method Based on an Improved Threshold Function. Authors: Zhang Zhaoru, Deng Caixia, Yue Xinhua. Source: Journal of Harbin University of Science and Technology, 2022, No. 1. Abstract: To make fingerprint recognition technology more accurate and efficient, an improved threshold function is proposed for denoising fingerprint images.

First, an improved threshold function is constructed according to the characteristics of fingerprint images. Compared with the traditional soft and hard threshold functions and some existing improved threshold functions, it is highly tunable and differentiable everywhere; it approximates the soft threshold function more closely and is smoother at the threshold point, so that denoising retains more of the true image content while still filtering out noise effectively.

Simulation experiments show that images processed with the improved threshold function have a higher peak signal-to-noise ratio and structural similarity, a smaller root-mean-square error and less distortion, and a larger correlation coefficient, so they are closer to the original image. The improved threshold function therefore has good practical value.

Keywords: threshold function; fingerprint image denoising; peak signal-to-noise ratio; structural similarity; root-mean-square error
DOI: 10.15938/j.jhust.2022.01.008    CLC number: TN911.73    Document code: A    Article ID: 1007-2683(2022)01-0055-06

Fingerprint Image Denoising Method Based on Improved Threshold Function
ZHANG Zhaoru, DENG Caixia, YUE Xinhua (School of Sciences, Harbin University of Science and Technology, Harbin 150080, China)

Abstract: In order to make fingerprint identification technology more accurate and efficient, an improved threshold function is proposed to denoise fingerprint images. First, according to the characteristics of fingerprint images, an improved threshold function is constructed. Compared with traditional soft and hard threshold functions and some existing improved threshold functions, this threshold function has good adjustability and is differentiable everywhere. It better approximates the soft threshold function, and the transition at the threshold point is smoother. When denoising an image, it retains more of the real information while effectively filtering out the noise. Simulation experiments show that the improved threshold function yields a high peak signal-to-noise ratio and structural similarity, a small root-mean-square error and distortion degree, and a better correlation coefficient, so results are closer to the original image. Therefore, the improved threshold function has good application value.

Keywords: threshold function; fingerprint image denoising; peak signal-to-noise ratio; structural similarity; root-mean-square error

0 Introduction

With advantages such as uniqueness, stability, ease of acquisition, and permanent inseparability from the subject [1-2], fingerprint images have become an effective means of identity recognition and are widely used in criminal investigation and security verification.
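This excerpt does not reproduce the authors' improved threshold function itself, so the sketch below shows only the classical soft and hard functions it is compared against, plus one common continuous compromise from the improved-threshold literature (illustrative, not the paper's function):

```python
import numpy as np

def hard_threshold(w, thr):
    # Keep coefficients above the threshold unchanged, zero the rest.
    return np.where(np.abs(w) > thr, w, 0.0)

def soft_threshold(w, thr):
    # Shrink all surviving coefficients toward zero by thr.
    return np.sign(w) * np.maximum(np.abs(w) - thr, 0.0)

def compromise_threshold(w, thr, alpha=2.0):
    # One common continuous compromise: behaves like soft thresholding near
    # thr and approaches hard thresholding for large |w|.
    shrink = np.abs(w) - thr * np.exp(alpha * (thr - np.abs(w)))
    return np.where(np.abs(w) > thr, np.sign(w) * shrink, 0.0)
```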

English Essay: The Camera

The camera, a device that captures moments in time, has evolved significantly over the years. From the early days of photography to the digital age, cameras have become an integral part of our lives, allowing us to document and share our experiences with others.

History of the Camera

The camera's history dates back to the 19th century with the invention of the first practical photographic process, the daguerreotype, by Louis Daguerre in 1839. This was followed by the introduction of the calotype by William Henry Fox Talbot, which allowed multiple copies to be made from a negative.

Types of Cameras

There are several types of cameras available today, each with its unique features and uses:

1. Digital Cameras: These are the most common type of camera today. They use digital technology to capture and store images, allowing for easy editing and sharing.
2. Film Cameras: These cameras use film to capture images. They are still popular among photography enthusiasts for their unique aesthetic and the tactile experience of developing film.
3. DSLR Cameras: Digital Single-Lens Reflex cameras are popular among professional photographers for their high-quality images and advanced features.
4. Mirrorless Cameras: These cameras, like DSLRs, offer high-quality images but without the mirror mechanism, making them lighter and more compact.
5. Instant Cameras: Instant cameras, such as the Polaroid, produce a physical print of the photo immediately after it is taken, offering a nostalgic and tangible experience.

Technological Advancements

With the advent of digital technology, cameras have become more sophisticated, offering features such as:

High-Resolution Sensors: These capture more detail in images, allowing for larger prints and better cropping flexibility.
Image Stabilization: This feature reduces the effect of camera shake, resulting in sharper images.
Autofocus Systems: Modern cameras have advanced autofocus systems that can track moving subjects and adjust focus quickly.
Built-in Editing Tools: Many cameras come with built-in editing tools, allowing photographers to adjust settings like exposure, contrast, and saturation directly on the camera.

Photography Techniques

Understanding different photography techniques can enhance the quality of photos taken with a camera:

Composition: Techniques like the rule of thirds, leading lines, and framing can make a photo more visually appealing.
Lighting: Understanding how to use natural and artificial light can dramatically affect the mood and quality of a photo.
Depth of Field: Controlling the depth of field can create a sense of depth or isolate a subject from the background.

The Impact of Cameras on Society

Cameras have had a profound impact on society, influencing how we remember events, communicate, and express ourselves creatively. They have also played a crucial role in journalism, allowing for the documentation of historical moments and the sharing of stories from around the world.

In conclusion, the camera is more than just a tool; it is a medium through which we can capture and share our world. As technology continues to advance, the possibilities for creative expression through photography will only continue to expand.

Research on Optimization of Image Denoising Techniques Based on Deep Learning

Research on Optimization of Image Denoising Techniques Based on Deep Learning

Introduction

With the increasing use of digital images in various fields including healthcare, security, telecommunication, and robotics, the task of image denoising has become essential. Image denoising refers to the process of removing the noise component from an image to obtain a clear and enhanced version. One of the most popular approaches to image denoising is the use of deep learning techniques. This article explores the optimization of image denoising techniques using deep learning.

Image Denoising Techniques

Various image denoising techniques have been developed over the years, ranging from traditional statistical approaches to more advanced deep learning methods. Traditional methods include median filtering, Wiener filtering, and wavelet-based approaches. However, these methods are not effective at removing complex noise patterns such as non-uniform noise, and they tend to blur image details.

Deep Learning Techniques for Image Denoising

Recently, deep learning techniques have emerged as a powerful approach to image denoising due to their ability to learn complex patterns in data. Deep learning methods for image denoising can be broadly classified as unsupervised, supervised, and semi-supervised. Unsupervised methods use autoencoder models to learn the image representation without labeled data, supervised methods train the network with labeled data, and semi-supervised techniques combine the two.

Optimization of Deep Learning Techniques for Image Denoising

One of the most challenging aspects of using deep learning methods for image denoising is optimizing the network to achieve maximum performance. Optimization involves fine-tuning the hyperparameters of the network, including the learning rate, batch size, and number of hidden layers. The choice of hyperparameters is a critical factor that determines the success of the network: optimal hyperparameters can improve the accuracy of the network and reduce the error margin.

In recent years, several techniques have been proposed to optimize the performance of deep learning models for image denoising:

1. Incorporation of data augmentation techniques such as rotation, flipping, and zooming to increase the size of the training dataset.
2. The use of ensemble techniques to combine the results of multiple models to enhance prediction accuracy.
3. Fine-tuning of pre-trained models to improve the accuracy of the network.
4. The use of non-linear activation functions such as Rectified Linear Units (ReLU) and leaky ReLU to improve the convergence speed of the network.
5. The implementation of regularization techniques such as dropout to reduce overfitting of the network.

Advantages of Deep Learning Techniques for Image Denoising

Deep learning techniques have several advantages over traditional methods for image denoising:

1. The ability to learn complex representations of data.
2. High accuracy in handling complex noise patterns.
3. Reduced computation time due to parallel processing of data.
4. Better generalization ability compared to traditional methods.

Conclusion

In conclusion, deep learning techniques provide a powerful approach to image denoising. Optimizing these techniques is necessary for improving the accuracy of the network and reducing the error margin.
Several optimization techniques have been proposed to enhance the performance of deep learning models for image denoising, including data augmentation, ensemble techniques, fine-tuning of pre-trained models, and regularization techniques. With these techniques, deep learning methods are becoming the preferred approach to image denoising in various applications.
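As a concrete illustration of the supervised approach described above, a small residual denoiser in PyTorch (a generic DnCNN-style sketch, not the architecture of any specific paper discussed here):

```python
import torch
import torch.nn as nn

class DenoiseCNN(nn.Module):
    def __init__(self, channels=1, features=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Residual learning: the network predicts the noise, which is subtracted.
        return x - self.body(x)

model = DenoiseCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # tunable hyperparameters
loss_fn = nn.MSELoss()  # supervised training pairs noisy inputs with clean targets
```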

Multimodal Medical Image Preprocessing and Fusion Methods

Regulations and privacy
Privacy protection of medical images and regulatory restrictions are also a major challenge.
Future directions and trends
Further application of deep learning: as deep learning technology continues to develop, its applications in medical image processing will become even broader.
High-dimensional medical image analysis: with the continual growth of multimodal medical image data, high-dimensional image analysis will become an important research direction.
Personalized medicine: personalized medicine based on big data and artificial intelligence will develop further, providing more application scenarios for multimodal medical image preprocessing and fusion methods.
Multidisciplinary collaboration
Challenges and Future Development of Multimodal Medical Image Preprocessing and Fusion Methods
Challenges
Technical complexity: the techniques involved in multimodal medical image preprocessing and fusion are highly complex and require deep professional knowledge and understanding.
Data heterogeneity: medical images of different modalities may differ in spatial resolution, contrast, and other aspects, which increases the difficulty of fusion.
Noise and interference: medical images may contain noise and interference, such as device errors and motion artifacts, which negatively affect the fusion result.
Laplacian pyramid fusion: fuse the Laplacian pyramids of images from different modalities to emphasize edge and detail information (a sketch follows this list of methods).
Region- or block-based fusion methods
Region-similarity fusion: segment the image into several regions and fuse them according to region similarity.
Block-based fusion: divide the image into blocks and fuse them based on the pixel information within each block.
Model-based fusion methods
Bayesian framework: build a model using Bayes' theorem to fuse the multimodal images.
Neural network methods: fuse the multimodal images using neural network techniques such as deep learning.
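As referenced above, a sketch of Laplacian-pyramid fusion for two registered single-channel images, assuming OpenCV and image dimensions divisible by 2^levels; keeping the per-level coefficient with the larger magnitude is one common rule for preserving edges and detail:

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = []
    for i in range(levels):
        up = cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
        lp.append(gp[i] - up)              # band-pass detail at this level
    lp.append(gp[-1])                      # low-frequency residual
    return lp

def fuse_laplacian(img_a, img_b, levels=4):
    lpa = laplacian_pyramid(img_a, levels)
    lpb = laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)   # keep the stronger detail
             for a, b in zip(lpa[:-1], lpb[:-1])]
    fused.append(0.5 * (lpa[-1] + lpb[-1]))           # average the base level
    out = fused[-1]
    for lap in reversed(fused[:-1]):                  # collapse the pyramid
        out = cv2.pyrUp(out, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return out
```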
Multidisciplinary collaboration will be further strengthened, with cross-fertilization among medicine, computer science, physics, biomedical engineering, and other disciplines driving the development of multimodal medical image preprocessing and fusion methods.
Conclusion
Summary of research results
Medical image preprocessing: in medical image preprocessing, researchers have proposed a variety of methods, including denoising, image enhancement, ...

Single Image Haze Removal Using Dark Channel Prior (Annotated Chinese Translation)

Single Image Haze Removal Using Dark Channel Prior
Kaiming He, Jian Sun, Xiaoou Tang
The Chinese University of Hong Kong; Microsoft Research Asia

Abstract: In this paper, we propose a simple but effective image prior, the dark channel prior, for removing haze from a single input image.

The dark channel prior comes from statistics over a database of outdoor haze-free images. It rests on a key observed fact: in the vast majority of outdoor haze-free images, every local region contains some pixels whose intensity is very low in at least one color channel.

With a dehazing model built on this prior, we can directly estimate the haze density and restore a high-quality haze-free image.

Results on a variety of outdoor hazy images demonstrate the great power of the dark channel prior.

At the same time, as a byproduct of the dehazing process, we can also obtain a high-quality depth map of the image.

1. Introduction

Images of outdoor scenes are usually degraded by turbid media in the atmosphere (such as particles and water droplets); haze, mist, and vapor all cause such degradation through atmospheric absorption or scattering.

The light that the camera receives from the scene is attenuated along the line of sight.

Moreover, the received light is mixed with airlight (ambient light reflected by atmospheric particles).

The degraded image loses contrast and color fidelity, as shown in Fig. 1.

Since the degree of atmospheric scattering depends on the distance from the scene point to the camera, the degradation varies spatially.

Image dehazing is in wide demand in consumer/computational photography and computer vision.

First, dehazing can significantly improve the clarity of the scene and correct the color shift caused by the atmosphere.

In general, dehazed images are more pleasant to look at.

Second, most computer vision algorithms, from low-level image analysis to high-level object recognition, generally assume that the input image is formed by the original light gathered from the scene.

Vision algorithms (such as feature detection, filtering, and photometric analysis) inevitably perform poorly on color-shifted, low-contrast images.

Third, dehazing can produce depth information for the image, which helps vision algorithms and advanced image editing.

Usage of ImageSharp: A Reply

First, before discussing how to use ImageSharp, let's understand what ImageSharp is.

ImageSharp is an image-processing library for the .NET platform. It provides rich image-processing functionality and can be used in .NET applications to edit, adjust, and convert images.

It supports multiple image formats and offers good performance and ease of use.

Next, we will answer the topic "Usage of ImageSharp" step by step.

Step 1: Install the ImageSharp library.

Before using ImageSharp, we first need to install it into our .NET project.

It can be installed through the NuGet package manager, or the library files can be downloaded manually and referenced in the project.

To install ImageSharp with the NuGet package manager:

1. Open Visual Studio and go to your .NET project.

2. In the Visual Studio menu, choose "Tools" -> "NuGet Package Manager" -> "Manage NuGet Packages for Solution".

3. Type "Imagesharp" into the search box at the top right, then click "Install" to install the latest version.

To download and reference ImageSharp manually:

1. Search for "Imagesharp" in your browser and find the official website or GitHub repository.

2. Find the download link on the official website or GitHub repository and download the latest version of the library files.

3. Extract the downloaded library files into your project folder.

4. Open your project in Visual Studio, right-click the project name, and choose "Add" -> "Existing Item".

5. In the file-selection dialog, select the extracted library files, then click "Add".

Step 2: Loading and saving images.

Before processing an image with ImageSharp, we need to load the image file; the processed image can then be saved to a specified location.
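ImageSharp itself is a C# library, so its exact API is outside the scope of the Python examples used elsewhere in this collection; for comparison only, the same load-adjust-save round trip sketched in Python with Pillow (hypothetical file names):

```python
from PIL import Image

# An analogous workflow in Pillow, not the ImageSharp API itself.
with Image.open("input.jpg") as im:   # load an image file (hypothetical path)
    im = im.resize((640, 480))        # a simple adjustment
    im.save("output.png")             # save, converting the format by extension
```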

Review of Translated Foreign Literature on Image Enhancement Technology

Review of Translated Foreign Literature on Image Enhancement Technology (the document contains the English original and its Chinese translation side by side). Original:

Hybrid Genetic Algorithm Based Image Enhancement Technology

Abstract—In image enhancement, Tubbs proposed a normalized incomplete Beta function to represent several kinds of commonly used nonlinear transform functions for image enhancement. But how to determine the coefficients of the Beta function is still a problem. We propose a hybrid genetic algorithm that combines differential evolution with a genetic algorithm in the image enhancement process and uses the algorithm's fast search ability to carry out adaptive mutation and search. Finally, we use simulation experiments to prove the effectiveness of the method.

Keywords—image enhancement; hybrid genetic algorithm; adaptive enhancement

I. INTRODUCTION

In the process of image formation, transfer, or conversion, objective factors such as system noise, insufficient or excessive exposure, and relative motion often leave the captured image different from the original scene; such an image is said to be degraded. A degraded image is usually blurred, and the information a machine can extract from it is reduced or even wrong, so measures must be taken to improve it.

Image enhancement technology is proposed in this sense; its purpose is to improve image quality. According to the condition of the image, various techniques are used to highlight some of the information in the image and reduce or eliminate irrelevant information, emphasizing global or local features of the image. There is still no unified theory of image enhancement; image enhancement techniques can be divided into three categories: point operations, spatial-domain enhancement, and frequency-domain enhancement. This paper presents an adaptive image enhancement method, called the hybrid genetic algorithm, that adjusts automatically according to image characteristics. It combines the adaptive search capability of the differential evolution algorithm and automatically determines the parameter values of the transformation function to achieve adaptive image enhancement.

II. IMAGE ENHANCEMENT TECHNOLOGY

Image enhancement emphasizes or highlights certain features of an image, such as contours, contrast, and edges, to facilitate detection or further analysis and processing. Enhancement does not increase the information in the image data, but expands the dynamic range of chosen features, making them easier to detect or identify and laying a good foundation for detection and follow-up analysis.

Image enhancement methods consist of point operations, spatial filtering, and frequency-domain filtering. Point operations include contrast stretching, histogram modeling, noise clipping, and image subtraction. Spatial filtering includes low-pass filtering, median filtering, and high-pass filtering (image sharpening). Frequency-domain filtering includes homomorphic filtering and multi-scale, multi-resolution image enhancement [1].

III. DIFFERENTIAL EVOLUTION ALGORITHM

Differential Evolution (DE) was first proposed by Price and Storn. Compared with other evolutionary algorithms, DE has a strong spatial search capability and is easy to implement and understand.
DE is a novel search algorithm: it first randomly generates the initial population in the search space, then computes the difference between two randomly chosen member vectors and adds that difference to a third member vector to form a new individual. If the fitness of the new individual is better than that of the original member, the new individual replaces it.

DE uses the same operations as a genetic algorithm, namely mutation, crossover, and selection, but implements them differently. Suppose the population size is P and the vector dimension is D; the target vector can be expressed as (1):

x_i = [x_{i1}, x_{i2}, ..., x_{iD}],  i = 1, ..., P    (1)

and the mutation vector can be expressed as (2):

V_i = X_{r1} + F (X_{r2} - X_{r3}),  i = 1, ..., P    (2)

where X_{r1}, X_{r2}, X_{r3} are three individuals randomly selected from the population with r1 != r2 != r3 != i. F is a real constant factor in the range [0, 2] that controls the influence of the difference vector, commonly referred to as the scaling factor. Clearly, the smaller the difference vector, the smaller the disturbance, which means that as the population approaches the optimum, the disturbance is automatically reduced.

DE's selection operation is a "greedy" selection mode: if and only if the fitness of the new vector u_i is better than that of the target-vector individual x_i will u_i be retained in the next generation. Otherwise, the target vector x_i remains in the population and serves once again as a parent vector for the next generation.
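A compact sketch of the DE building blocks just described, the mutation rule of (2) plus greedy selection (crossover is omitted for brevity), together with the normalized incomplete Beta transform of (4)-(5), which SciPy exposes directly as the regularized incomplete Beta function (an illustration, not the authors' code):

```python
import numpy as np
from scipy.special import betainc

def beta_transform(u, alpha, beta):
    # Eqs. (4)-(5): the normalized (regularized) incomplete Beta function.
    return betainc(alpha, beta, u)

def de_step(pop, fitness, F=0.8):
    # One DE generation over a population array of shape (P, D):
    # mutation per Eq. (2) plus greedy selection.
    P = len(pop)
    new_pop = pop.copy()
    for i in range(P):
        choices = [j for j in range(P) if j != i]
        r1, r2, r3 = np.random.choice(choices, 3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])   # mutation, Eq. (2)
        if fitness(v) > fitness(pop[i]):        # keep the better individual
            new_pop[i] = v
    return new_pop
```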
IV. HYBRID GA FOR IMAGE ENHANCEMENT

Image enhancement is the foundation of fast object detection, so a real-time algorithm with good performance is needed. To meet the practical requirements of different systems, many algorithms need manually determined parameters and thresholds. A normalized incomplete Beta function can completely cover the typical image-enhancement transform types, but determining the Beta function parameters remains an open problem. This section presents an adaptive image enhancement method based on the Beta function: the hybrid genetic algorithm's adaptive search capability automatically determines the parameter values of the transformation function to achieve adaptive image enhancement.

The purpose of image enhancement is to improve image quality, making specified features more prominent, restoring the details of a degraded image, and so on. A common feature of degraded images is low contrast, usually presenting as overly bright, dim, or concentrated gray levels. A low-contrast degraded image can be enhanced by stretching the dynamic range of its histogram, i.e., a gray-level transformation. We use I_xy to denote the gray level of point (x, y), expressed by (3):

I_{xy} = f(x, y)    (3)

where f is a linear or nonlinear function. In general, gray images have four kinds of nonlinear transformations [6], [7], as shown in Figure 1. We use a normalized incomplete Beta function to automatically fit the four categories of image-enhancement transformation curves. It is defined in (4):

f(u) = B^{-1}(\alpha, \beta) \int_0^u t^{\alpha-1} (1-t)^{\beta-1} \, dt,  0 < u < 1,  0 < \alpha, \beta < 10    (4)

where

B(\alpha, \beta) = \int_0^1 t^{\alpha-1} (1-t)^{\beta-1} \, dt.    (5)

For different values of \alpha and \beta, we can obtain the response curves from (4) and (5).

The hybrid GA uses the adaptive differential evolution algorithm of the previous section to search for the best Beta function parameter values, and then maps each pixel's gray value through the Beta function, producing the corresponding transformation of Figure 1 and the desired image enhancement. The detailed description follows.

Assume the gray level of pixel (x, y) of the original image is denoted by i_xy, (x, y) in \Omega, where \Omega is the image domain. The enhanced image is denoted by I_xy. First, the image gray values are normalized into [0, 1] by (6):

g_{xy} = \frac{i_{xy} - i_{min}}{i_{max} - i_{min}}    (6)

where i_max and i_min are the maximum and minimum image gray levels, respectively.

Define the nonlinear transformation function f(u) (0 <= u <= 1) to transform the source image to G_xy = f(g_xy), where 0 <= G_xy <= 1.

Finally, we use the hybrid genetic algorithm to determine the optimal parameters \alpha and \beta of the Beta function f(u), and the enhanced image G_xy is anti-normalized back to the original gray range.

V. EXPERIMENT AND ANALYSIS

In the simulation, we used two different types of degraded gray-scale images; the program was run 50 times, with a population size of 30, evolved for 600 generations. The results show that the proposed method can very effectively enhance different types of degraded images.

In Figure 2, the original image (320x320) has low contrast, and some details, in particular the texture of the scarves, are obscure, giving a poor visual effect. The method proposed in this section overcomes these issues and produces a satisfactory result, as shown in Figure 5(b); the visual effect is well improved. From the histogram, the distribution of image intensity is more uniform, and the distribution of light and dark gray areas is more reasonable. The hybrid genetic algorithm automatically identifies the nonlinear transformation curve; the parameter values obtained were 9.837 and 5.7912. The resulting curve is consistent with the c-class curve of Figure 3: the middle region is stretched and the two ends are compressed, which agrees with the histogram. The original image has low overall contrast; compressing both ends and stretching the middle region is consistent with human vision, and the enhancement effect is significantly improved.

In Figure 3, the original image (320x256) has low overall intensity. Using the method proposed in this section (image b), the resolution and contrast of details such as the ground, chairs, and clothes are significantly improved over the original image. The gray distribution of the original image is concentrated in the lower region, while the enhanced image's gray levels are uniform. The gray transformation is basically of the same class as Figure 3(a), namely, the dim region of the image is stretched; the parameter values were 5.9409 and 9.5704. The inference of the degradation type from the nonlinear transformation is correct, and the enhancement has a good visual effect and robustness.

It is difficult to assess the quality of image enhancement, and there is still no common evaluation criterion. The peak signal-to-noise ratio (PSNR) is commonly used, but PSNR does not reflect the errors perceived by the human visual system.
Therefore, we use an edge protection index and a contrast increase index to evaluate the experimental results.

The Edge Protection Index (EPI) is defined as follows:

(7)

The Contrast Increase Index (CII) is defined as follows:

CII = C_E / C_O,  C = \frac{G_{max} - G_{min}}{G_{max} + G_{min}}    (8)

where C_E and C_O are the contrast of the enhanced and original images, and G_max and G_min are the maximum and minimum gray values.

In Figure 4, we compared with the wavelet-transform-based algorithm and obtained the evaluation numbers in TABLE I.

Figures 4(a) and 4(c) show the original image and the result enhanced by the differential evolution algorithm; the contrast is markedly improved, image details are clearer, and edge features are more prominent. Panels b and c compare wavelet-based and hybrid-genetic-algorithm-based image enhancement: the wavelet-based method brings out some image detail and visually improves on the original image, but the enhancement is not obvious, whereas the adaptive-transform enhancement based on the hybrid genetic algorithm works very well, with image details, texture, and clarity greatly improved compared with the wavelet-transform results, which helps post-processing and analysis. The wavelet enhancement experiment used the "sym4" wavelet; in the differential evolution enhancement experiment, the parameter values were 5.9409 and 9.5704. For a 256x256 image, adaptive-transform enhancement based on the hybrid genetic algorithm takes about 2 seconds in Matlab 7.0, which is very fast. From the objective evaluation criteria in TABLE I, both the edge protection index and the contrast increase index of the adaptive hybrid genetic algorithm are larger than those of the traditional wavelet-transform method, which shows the objective advantages of the method described in this section. From the above analysis, we can see that this method is useful and effective.

VI. CONCLUSION

In this paper, to maintain the integrity of image information, a hybrid genetic algorithm is used for image enhancement. As the experimental results show, the enhancement method based on the hybrid genetic algorithm has an obvious effect. Compared with other evolutionary algorithms, the hybrid genetic algorithm performs outstandingly: it is simple, robust, and converges rapidly, finding a nearly optimal solution in each run, while only a few parameters need to be set and the same set of parameters can be used for many different problems. Using the hybrid genetic algorithm's quick search capability, adaptive mutation and search are performed for a given test image to finalize the optimal parameter values of the transformation function. Compared with an exhaustive search, this significantly reduces the time required and the computational complexity. Therefore, the proposed image enhancement method has practical value.

REFERENCES

[1] HE Bin et al. Visual C++ Digital Image Processing [M]. Posts & Telecom Press, 2001, 4: 473-477.
[2] Storn R, Price K. Differential Evolution: a Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Space [R]. International Computer Science Institute, Berkeley, 1995.
[3] Tubbs J D. A note on parametric image enhancement [J]. Pattern Recognition, 1997, 30(6): 617-621.
[4] TANG Ming, MA Song De, XIAO Jing. Enhancing Far Infrared Image Sequences with Model Based Adaptive Filtering [J].
Chinese Journal of Computers, 2000, 23(8): 893-896.
[5] ZHOU Ji Liu, LV Hang. Image Enhancement Based on A New Genetic Algorithm [J]. Chinese Journal of Computers, 2001, 24(9): 959-964.
[6] LI Yun, LIU Xuecheng. On Algorithm of Image Contrast Enhancement Based on Wavelet Transformation [J]. Computer Applications and Software, 2008, 8.
[7] XIE Mei-hua, WANG Zheng-ming. The Partial Differential Equation Method for Image Resolution Enhancement [J]. Journal of Remote Sensing, 2005, 9(6): 673-679.

Image Enhancement Technology Based on a Hybrid Genetic Algorithm

Abstract: In image enhancement, Tubbs proposed a normalized incomplete Beta function to represent several commonly used nonlinear transform functions for the study of image enhancement.

Rician Noise Removal in MRI Images via Non-Local Principal Component Analysis and Maximum Likelihood Estimation

WU Xi, ZHOU Jiliu, XIE Mingyuan
(Department of Electronic Engineering, Chengdu University of Information Technology, Chengdu 610225, China)
(College of Computer Science, Sichuan University, Chengdu 610065, China)

Abstract: As the noise in the MRI is under the Rician distribution, implementation of commonly used ...

... denoising methods, the parameter-corrected non-local means method for removing Rician noise, and the total-variation method with a dedicated noise model; qualitative and quantitative denoising experiments were performed on images with different noise levels and different texture complexity. The results show that, while preserving image details and texture information, the proposed method can ...

Maximum Likelihood Estimation Image Denoising Using Non-Local Principal Component Analysis
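Since magnitude MRI noise is Rician rather than Gaussian, denoisers derived for Gaussian noise leave a signal-dependent bias. A common baseline correction (not this paper's non-local PCA/ML method) uses the even-moment identity E[M^2] = A^2 + 2*sigma^2 for a Rician magnitude M with underlying amplitude A:

```python
import numpy as np

def rician_bias_correct(m_hat_sq, sigma):
    # Subtract the 2*sigma^2 noise floor from a denoised squared magnitude,
    # clipping negative values to zero before taking the square root.
    return np.sqrt(np.maximum(m_hat_sq - 2.0 * sigma**2, 0.0))
```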

Iris Image Recognition and Processing: Translated Foreign Literature

Foreign Text 1:

Abstract

Biometric recognition is a technology that verifies a person's identity based on the physiological and behavioral characteristics inherent in the human body. It closely combines high-tech methods such as computers, optics, acoustics, biosensors, and biometric principles to appraise individual identity through these inherent physiological and behavioral characteristics.

Biometric recognition has the merits of being hard to forget, offering good anti-forgery performance, being hard to counterfeit or steal, being "carried" with the person, and being available anytime and anywhere.

Iris recognition is a new method for personal identification based on biological features, and it has significant value in the information-security field. Building on the previous work of other researchers, this thesis discusses in detail the key techniques of iris image capture and iris-circle location, and puts forward some improved approaches to these problems. Iris localization is implemented and proves efficient.

Iris location is a crucial part of the iris recognition process, so locating the iris precisely and quickly is the prelude to effective iris recognition. Iris location is a kernel procedure in an iris recognition system; the speed and accuracy of the location decide the performance of the whole system. Taking advantage of the properties of the iris image, the images are preprocessed and the pseudo-center of the pupil is determined by a gray-projection method. Then a calculus (integro-differential) operator locates the inner and outer boundaries of the iris; in this paper, this algorithm is based on the Daugman algorithm. Finally, the localization process is realized in Matlab.

Keywords: iris location, biometric recognition, calculus operator, Daugman algorithm

Table of Contents

Chapter 1 Introduction
1.1 The research background of iris recognition
1.2 The purpose and significance
1.3 Domestic and foreign research
Chapter 2 Introduction to iris recognition technology
2.1 Biometric identification technology
2.1.1 Status and development
2.1.2 Several biometric technologies
2.2 Iris recognition technology
2.3 Summary
Chapter 3 Research status of iris location algorithms
3.1 Several common localization algorithms
3.1.1 Hough transform method
3.1.2 Geometric-features location method
3.1.3 Active contour positioning method
3.2 The positioning algorithm studied
Chapter 4 Iris localization algorithm based on a calculus operator
4.1 Image preprocessing
4.1.1 Iris image smoothing (denoising)
4.1.2 Image sharpening (filtering)
4.2 Coarse positioning of the inner edge of the iris
4.3 Locating the iris with the calculus operator
4.4 Summary
Chapter 5 Conclusion
References

Chapter 1

1.1 The research background of iris recognition

Biometrics is a technology for personal identification using physiological and behavioral characteristics inherent in the human body. Biological characteristics usable for recognition include the fingerprint, hand shape, face, iris, retina, pulse, ear, and so on; behavioral characteristics include the signature, voice, gait, etc. Based on these characteristics, hand-shape recognition, fingerprint recognition, face recognition, iris recognition, signature recognition, and other biometric technologies have been developed, many of which have matured into applications.

Biometric recognition has a long history:
Modern biometric technology, however, began in the mid-1970s. Because early biometric devices were relatively expensive, they were used only at sites with high security levels, such as nuclear test and production facilities. As the cost of microprocessors and electronic components fell and their precision improved, biometric control devices gradually spread to commercial authorization: access control, attendance management, management systems, security certification, and so on. Among all biometric technologies, iris recognition is currently one of the most convenient and accurate.

The twenty-first century is a century of information and network technology, in which people increasingly free themselves from traditional techniques. Biometric authentication, which began to flourish at the end of the twentieth century, will play an ever more important role in social life and fundamentally change the way people live. Features of the body itself, such as the iris, fingerprint, and DNA, will gradually replace existing passwords and keys, securing personal data to the greatest extent while helping to prevent various crimes, including economic crimes. Because of its unique advantages in acquisition and accuracy, iris recognition is likely to become the mainstream biometric authentication technology of the future; applications in security control, customs inspection, e-commerce, and other fields will inevitably center on it, and this trend has already begun to appear in applications around the world.

1.2 Purpose and significance of iris recognition
Iris recognition has risen in recent years; its strong advantages and commercial potential have driven international companies and institutions to invest considerable manpower, money, and effort in research. The concept of automatic iris identification was first proposed by Flom, and Daugman was the first to make the algorithm feasible.

The iris is the colored, fabric-like ring surrounding the pupil; every iris contains a unique structure based on features such as the crown, crystalline region, filaments, spots, pits, rays, wrinkles, and fringes. Unlike the retina, which lies at the fundus and is difficult to image, the iris is directly visible, and biometric systems can capture fine iris images with camera equipment. Iris recognition rests on the following facts: the fibrous tissue of the iris is rich and complicated in detail, and its formation during the embryonic stage is highly random; the inherent characteristics of iris tissue differ from person to person, and even identical twins have no real possibility of identical features; once fully developed, the iris changes only minutely over a person's lifetime; and the iris is separated from the outside world by the transparent cornea, so a mature iris is not easily damaged or changed. These characteristics give the iris its advantages. Moreover, during acquisition the human eye does not directly contact the CCD, CMOS, or other light sensor, so capture is non-invasive.
As an important means of biometric identity verification, iris recognition has therefore attracted growing attention from both academia and industry by virtue of the stability, uniqueness, and non-invasiveness of iris texture information.

1.3 Research status and applications at home and abroad
Statistics from IDC indicated that by the end of 2003 the global market for iris recognition technology and related products would reach two billion US dollars. A conservative survey by China's biometric authentication research center predicted that within five years the iris recognition market in China alone would reach four billion RMB; as applications expand, especially into e-commerce, this figure could grow to hundreds of billions.

The development of iris recognition can be traced back to the 1880s. In 1885, Alphonse Bertillon applied the idea of identifying criminals by biometric measurement in Paris prisons; the features used at the time included ear size, foot length, and the iris. In 1987, the ophthalmologists Aran Safir and Leonard Flom first proposed the concept of automatic iris recognition from iris images, and in 1991 Johnson of the Los Alamos National Laboratory in the United States realized an automatic iris recognition system. In 1993, John Daugman implemented a high-performance automatic iris recognition system. In 1997, the first Chinese patent on iris recognition was granted, to the applicant Wang Jiesheng. In 2005, the National Laboratory of Pattern Recognition of the Institute of Automation, Chinese Academy of Sciences, won two National Technology Invention Prizes for outstanding achievements in iris image acquisition and recognition, representing the highest level of iris recognition development in China. In November 2007, the national standard "Requirements for information security technology in iris recognition systems" (GB/T 20979-2007) was promulgated and implemented, drafted by Beijing Arithen Information Technology Co., Ltd.

Future applications in security control, customs inspection, and e-commerce will inevitably focus on iris recognition, a trend already visible worldwide. Abroad, iris recognition products have been applied over a wide range. On February 8, 2002, London's Heathrow Airport began testing an advanced security system that scans passengers' eyes instead of checking passports; the five-month trial was open to British Airways and Virgin passengers, and the International Air Transport Association, interested in the results, encouraged Heathrow to test identifying boarding passengers by the iris in place of a boarding pass. The "IriScan" recognition system developed in the United States has been deployed in three branches of the Union Bank of Texas: depositors need no bank card and no password to do their banking, with no more memory trouble.
When they withdraw money from the ATM, a camera first scans the user's eye, converts the scanned image into digital information, checks it against the stored data, and so verifies the user's identity.

The Plumsted school district in New Jersey has installed iris recognition devices for campus security control. Students and staff no longer use cards or certificates of any kind: as long as they pass in front of the iris camera, their location and identity are recognized by the system, and all outside workers must have their iris data registered to enter the campus. A central login and access-control system restricts each person's range of activity. After the system was installed, rule violations, infringements, and criminal activity on campus dropped sharply, greatly reducing the difficulty of campus management.

In Afghanistan, the United Nations and the UN refugee agency (UNHCR) use an iris recognition system to identify refugees and prevent the same refugee from collecting relief goods multiple times; the same system is used in refugee camps in Pakistan and Afghanistan. More than two million refugees in total have used the system, which has played a key role in the UN's distribution of humanitarian aid.

On March 18, 2003, Abu Dhabi (an emirate of the United Arab Emirates) announced the world's first national-level border-control system for tracking expelled foreigners by iris recognition. Construction began in 2001, and the system's purpose is to prevent anyone expelled from Abu Dhabi from re-entering. Previously, because of the similar facial characteristics of the local population (many wear full beards) and the large number of expelled persons, customs inspectors found it very difficult to tell who had been deported; with this system, illegal re-entry is prevented and national security is guaranteed to the greatest extent.

John F. Kennedy International Airport in New York State installed an iris recognition system at its fourth international boarding gate, and 300 of its 1,300 employees have already begun using it for login control. With the system, everyone entering the apron must pass its security certification; if an unauthorized person tries to break through, the system automatically takes emergency measures and shuts the intruder in a guarded space. Using this system, the airport's safety grade rose from B+ to A+, and passenger traffic to other destinations increased by 18.7%.
Generally speaking, iris recognition has begun to be applied in various forms in all walks of life around the world, and the organizations that adopt it have realized social and economic benefits both visible and invisible. The trend is accelerating, and within the next ten years iris recognition is expected to reach comprehensive application across industries.

In China, because of technology embargoes and the inherent difficulty of iris technology, products could not be developed domestically, and so far no real applied iris recognition system has appeared. However, many domestic organizations have expressed strong interest, especially after the September 11 attacks, when security and anti-terrorism awareness became the foremost concern in aviation and finance. Iris recognition systems have become a focus of attention for major airlines, major financial institutions, and other security-sensitive agencies (such as the aerospace administration). Following the worldwide trend, iris recognition is likely to set off a wave of applications in China in the near future.

Chapter 2 Overview of Iris Recognition Technology
2.1 Biometric identification technology
2.1.1 Status and development
The September 11 attacks were an important turning point in the development of biometric identification worldwide: they made governments more clearly aware of its importance. Traditional identification technologies had shown their defects in the face of terrorism, and governments began large-scale investment in biometric research and application; at the same time, public awareness of biometrics rose greatly with the exposure that followed the attacks.

Traditional personal identification relies either on identifying knowledge or on identifying objects. Identifying knowledge refers to personal knowledge committed to memory; such a system is easy to deploy, but once the knowledge is stolen or forgotten, the identity is easily faked or replaced. This method is still in wide use, for example user names and passwords. Identifying objects are items a person possesses; although stable and reliable, they are external to the body, and once an identifying item is lost or stolen the identity is again easily faked or replaced, for example keys, certificates, magnetic cards, and IC cards.

Biometric identification confirms identity by comparing a person's physiological or behavioral characteristics against previously recorded ones, and it can be widely used in all fields of society. For example, a customer walks into a bank without a bank card and without remembering any password and withdraws money directly: at the cash machine, a camera scans his eyes, then quickly and accurately completes identification and processes the transaction. This is an application of the iris recognition system. After September 11, counter-terrorism became a consensus among governments, and strengthening airport security became essential; some airports in the United States can pick a face out of a crowd and check whether that person is wanted.
This is "facial recognition", another application of modern biometric technology.

Compared with traditional means of identification, biometric identification generally has the following advantages: (1) good security, since it is not easy to counterfeit or steal; (2) it is carried with the person and available whenever and wherever needed, and is therefore safer than other identification methods.

Biological information used for biometric recognition must satisfy three basic conditions: universality, uniqueness, and permanence. Universality means every individual possesses the feature. Uniqueness means no one other than the person has the same feature; it differs from person to person. Permanence means the feature does not change over time; it lasts for life. Selecting biological features with these three properties is the first step of biometric recognition.

In addition, two important indexes govern any biometric technology: the false rejection rate and the false acceptance rate. Adjusting the relationship between these two values is very important: when the false rejection rate is high, usability suffers; when the false acceptance rate is high, security is reduced. Tuning these two indexes is an unavoidable part of any biometric deployment, and the chosen operating range determines whether the system is feasible and usable; the trade-off is illustrated by the sketch below.

Identity recognition based on iris features has emerged within biometrics and developed rapidly; thanks to its uniqueness, stability, convenience, and reliability, it is a biometric technology with strong prospects.

Generally speaking, a biometric recognition system consists of four steps. First, an image-acquisition system collects the biometric image; second, the image is preprocessed (location, normalization, enhancement, and so on); third, feature information is extracted and converted into a digital code; fourth, the generated test code is compared with the template codes in the database to make the identification.
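As a toy illustration of the trade-off just described (the matching scores below are made up, not from any real system), the following sketch counts both error rates at a given decision threshold:

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    """Raising the threshold lowers the false acceptance rate (FAR)
    but raises the false rejection rate (FRR), and vice versa."""
    frr = np.mean(np.asarray(genuine_scores) < threshold)    # true users rejected
    far = np.mean(np.asarray(impostor_scores) >= threshold)  # impostors accepted
    return far, frr

far, frr = far_frr([0.9, 0.8, 0.7, 0.95], [0.2, 0.4, 0.6, 0.1], threshold=0.5)
print(far, frr)  # 0.25 0.0: one impostor score (0.6) clears the threshold
```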

Image Dehazing in Deep Learning: Methods and a Practical Guide

Image dehazing is an important problem in deep learning; it improves the visual quality and usability of images.

Dehazing has broad application prospects, especially in computer vision, image processing, and graphics. In deep learning, the goal of image dehazing is to remove the haze caused by atmospheric scattering and recover a clear image, and researchers have proposed a series of methods to that end. The discussion below covers basic principles, common methods, and practical guidelines.

First, the basic principle of deep-learning-based dehazing is to train a neural network to estimate the haze-formation model of the image and thereby remove the haze. The advantage of this approach is that it learns and extracts features automatically, so it adapts better to different image types and scenes. Common deep models include convolutional neural networks (CNNs), generative adversarial networks (GANs), and recurrent neural networks (RNNs).

Second, common dehazing methods divide into single-image and multi-image dehazing. Single-image methods analyze the local features and global information of the input image and perform the dehazing with a convolutional neural network.

The basic principle of these methods is to train the network to estimate the transmission map and then restore the clear image through deconvolution and optimization, as in the sketch below.
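As an illustration of that idea (a minimal sketch, not any specific published network), a tiny PyTorch model estimates a transmission map and the haze imaging model is then inverted physically:

```python
import torch
import torch.nn as nn

class TransmissionNet(nn.Module):
    """Tiny CNN mapping a hazy RGB image to a per-pixel transmission map t(x)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # t in (0, 1)
        )

    def forward(self, x):
        return self.body(x)

def restore(hazy, t, A, t0=0.1):
    # Invert the haze imaging model I = J*t + A*(1 - t)  =>  J = (I - A)/t + A
    t = t.clamp(min=t0)  # lower-bound t so thin-haze noise is not amplified
    return ((hazy - A) / t + A).clamp(0, 1)
```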

Multi-image methods, by contrast, use the information in several input images to reduce the haze, typically with generative adversarial networks and recurrent neural networks, improving the dehazing result while avoiding excessive computation.

To improve dehazing performance, several practical guidelines have been proposed. First, choosing a suitable training dataset is essential: pairs of clear images and corresponding hazy images play a key role in training the model and evaluating its results. Second, a sensible choice of network architecture and parameter settings improves the model's generalization and robustness across scenes.

In addition, appropriate data augmentation increases the diversity of the training samples and further improves robustness and generalization; a plausible pipeline is sketched below.
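For instance, a torchvision augmentation pipeline along these lines (an illustration, not a prescription; for paired dehazing data the same random flips and crops must be applied identically to the hazy input and its clear ground truth):

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),  # mirror left-right
    transforms.RandomVerticalFlip(p=0.5),    # mirror top-bottom
    transforms.RandomCrop(256),              # random 256x256 patch
    transforms.ToTensor(),                   # PIL image -> float tensor in [0, 1]
])
```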

Finally, when dehazing in practice, parameters should be tuned to the situation at hand to obtain better results.

Besides deep-learning methods, some traditional dehazing methods are also worth consulting.

For example, the dark channel prior exploits the dark-channel statistics of images to estimate haze density and then remove the haze; a minimal sketch of the dark channel itself follows.
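A minimal NumPy/SciPy sketch of the dark channel computation (the function name and patch size are ours, chosen for illustration):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel of an RGB image in [0, 1]: per-pixel minimum over the
    color channels, then a local minimum filter over patch x patch windows."""
    min_rgb = img.min(axis=2)
    return minimum_filter(min_rgb, size=patch)

# For haze-free outdoor images this is close to zero almost everywhere;
# bright regions in the dark channel therefore indicate haze.
```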

Slides from Kaiming He's CVPR talk: Single Image Haze Removal Using Dark Channel Prior
(What follows is the recoverable content of the slide deck; the figure panels survive only as captions and axis labels.)

A Surprising Observation: histogram of dark-channel pixel intensities over 5,000 haze-free images (probability vs. pixel intensity, 0-256); 86% of the pixels fall in [0, 16].

Dark Channel Prior / Haze Imaging Model: the hazy image, scene radiance, and transmission are related by I = J t + A (1 − t), and depth follows from transmission as d ∝ −ln t.

Ambiguity in Haze Removal: many pairs of scene radiance and transmission can explain the same input.

Soft matting refinement: minimize E(t) = ‖t − t̃‖² + λ tᵀLt, a data term plus a smoothness term, where L is the matting Laplacian [Levin et al., CVPR '06]; the constraint here is soft and dense (matting uses hard, sparse constraints).

Related single-image work: contrast maximization [Tan, CVPR '08]; Independent Component Analysis [Fattal, SIGGRAPH '08].

Priors in Computer Vision: a prior (for example, a smoothness prior) turns an ill-posed problem into a well-posed one.

Limitation: when scene objects are inherently similar to the airlight, the dark channel is no longer dark.

Thank you.

How to Use ImageSharp

ImageSharp is a powerful and flexible image-processing library for performing operations on images, including but not limited to resizing, brightness and contrast adjustment, rotation, cropping, and sharpening.

It is written in C# on the .NET platform and is well suited to .NET Core projects and other .NET applications. This article introduces ImageSharp usage step by step, with example code to help readers understand and apply the library.

Step 1: Install ImageSharp. To use ImageSharp, first install it in your project by running the following command in the NuGet Package Manager Console:

    Install-Package SixLabors.ImageSharp

This downloads and installs the latest version of the library.

Step 2: Import the namespace. Before using ImageSharp in a code file, import its namespace at the top of the file:

    using SixLabors.ImageSharp;

The library's types and methods are then available in the file.

Step 3: Open an image file. To process an image with ImageSharp, first open an image file and load it into an Image object:

    using (Image image = Image.Load("image.jpg"))
    {
        // image-processing operations go here
    }

This loads the image at the given path; different paths and file types (png, bmp, and so on) can be used as needed.

Step 4: Perform image-processing operations. Once the image is loaded into an Image object, various operations can be applied. A common example is resizing, using the Resize method:

    image.Mutate(x => x.Resize(800, 600));

This resizes the image to 800 pixels wide by 600 pixels high.

A Dehazing Algorithm for License-Plate Fog Images Based on a Deep Multi-Level Wavelet U-Net

Journal of Hunan University (Natural Sciences), Vol. 49, No. 6, June 2022

A Dehazing Algorithm for License-Plate Fog Images Based on a Deep Multi-Level Wavelet U-Net
CHEN Bingquan, ZHU Xi, WANG Zhengyang, LIANG Yincong (College of Information Science and Engineering, Jishou University, Jishou 416000, China)

Abstract: To address the blurred edges and color distortion of license-plate images captured in foggy weather, an end-to-end dehazing algorithm based on a deep multi-level wavelet U-Net is proposed. With MWCNN as the backbone of the dehazing network, an "SOS" boosting strategy and cross-layer connections between the encoder and decoder integrate feature information in the wavelet domain, and a pixel-channel joint attention block built on the discrete wavelet transform reduces residual haze in the dehazed plate image. In addition, a cross-scale aggregation enhancement block supplements the spatial-domain image information missing from the wavelet-domain image, further improving the quality of the dehazed plate image. Simulations show that the network has clear advantages in structural similarity and peak signal-to-noise ratio and performs well on both synthetic and actually photographed plate fog images.

Keywords: license-plate image dehazing; MWCNN; "SOS" boosting strategy; cross-layer connection; attention mechanism; cross-scale aggregation

Received 2021-11-01. Supported by the National Natural Science Foundation of China (No. 62141601) and a key project of the Hunan Provincial Education Department (No. 21A0326). Article No. 1674-2974(2022)06-0124-11; DOI: 10.16339/ki.hdxbzkb.2022293.

When optical imaging devices (cameras, surveillance equipment, and so on) capture scenes or objects in heavy fog, the images often show low contrast and blurred edges, characters, and other details. Image dehazing is a typical ill-posed problem in image processing that aims to recover the haze-free image from a foggy one; as a way of improving image quality it has been widely applied in image classification, recognition, and traffic monitoring. Dehazing of uniform and non-uniform haze in many kinds of scenes (indoor, outdoor natural, road, and night-haze scenes) has drawn wide attention in recent years, but because real haze affects images in complex and variable ways, recovering haze-free images from real foggy photographs remains challenging.

Existing dehazing techniques fall into three classes: methods based on mathematical models, such as histogram equalization [1], the wavelet transform [2], and Retinex color-constancy theory [3]; methods based on the atmospheric scattering model (ASM) and statistical priors, such as the dark channel prior (DCP) [4-5], the color attenuation prior (CAP) [6], and the non-local prior (NLP) [7]; and deep-learning methods, such as DehazeNet [8], DCPDN [9], and AODNet [10].
Deep convolutional neural networks are now pervasive in computer vision. In 2019, Liu et al. [11] observed that pooling and dilated filters enlarge a CNN's receptive field at the cost of information loss or gridding artifacts; by embedding multi-level wavelet transforms into the CNN they achieved a good trade-off between receptive-field size and computational efficiency, proposed the multi-level wavelet CNN (MWCNN), and demonstrated its effectiveness for image denoising, single-image super-resolution, and image classification. The same year, Yang et al. [12] argued that the discrete wavelet transform and its inverse can replace the down-sampling and up-sampling in U-Net and proposed a wavelet U-Net for single-image dehazing whose structure closely resembles MWCNN. In 2020, Yang et al. [13] combined multi-level wavelets with channel attention in a wavelet channel-attention module for single-image deraining, and Peng et al. [14] combined residual blocks with MWCNN in a multi-level wavelet residual network (MWRN) for denoising. In 2021, Chen et al. [15] added multi-scale dense blocks to MWCNN to extract multi-scale information and refined the reconstruction in the spatial domain to compensate for the differences between wavelet-domain and spatial-domain image representations, achieving blind deblurring.

To cope with the low contrast and illegible characters of plate images in heavy fog, many researchers apply existing dehazing techniques as preprocessing for plate recognition, but most only lightly adapt existing algorithms such as Retinex or DCP. Although these achieve some dehazing, they recover plate features poorly and struggle with medium and dense fog. In 2020, Wang et al. [16] dehazed plate images with explicit attention to plate color and character information, improving plate image quality.

Motivated by the above, this paper proposes an end-to-end license-plate dehazing algorithm based on a deep multi-level wavelet U-Net that handles different plate types under different haze levels. First, a three-branch structure combining DWT, channel attention (CA), and pixel attention (PA) weights the channels and pixels of the features at each encoder level so that the network focuses on the hazy regions of the plate image. Second, an "SOS" boosting block is introduced in the decoder to further fuse and enhance the decoder features with lower-level inputs and raise the PSNR of the dehazed image, and layer-to-layer connections between the encoder and decoder exploit feature information across network levels and scales. Finally, a cross-scale aggregation enhancement block compensates in the spatial domain for the differences between wavelet-domain and spatial-domain image representations, enriching the network's expression of detail and effectively improving quality.

1 Dehazing Network Structure
The dehazing network (Fig. 1) consists of two modules: module A dehazes the plate fog image x_LPHaze in the wavelet domain, and module B further refines module A's output haze-free image in the spatial domain. Table 1 gives module A's structure parameters: at each of its four levels, an attention block, convolution layers (3×3 kernels, with channel counts growing 16, 64, 256, 1024), and residual groups appear in both the encoder and the decoder. The overall output is

    y_LPDhaze = y_B(y_A(x_LPHaze; θ_A); θ_B)    (1)

where y_A(·) and y_B(·) are the outputs of modules A and B, and θ_A and θ_B are their learnable parameters.

1.1 Wavelet U-Net
The two-dimensional discrete wavelet transform (2D-DWT) decomposes a given image I into the wavelet domain; the decomposition can be viewed as convolving I with four filters, one low-pass filter f_LL and three high-pass filters f_LH, f_HL, and f_HH, each built from a 1D low-pass filter f_L and high-pass filter f_H. For the Haar wavelet, the kernels are

    f_L = (1/√2) [1, 1]ᵀ,  f_H = (1/√2) [1, −1]ᵀ
    f_LL = f_L f_Lᵀ,  f_HL = f_H f_Lᵀ,  f_LH = f_L f_Hᵀ,  f_HH = f_H f_Hᵀ    (2)

2D-DWT is therefore implemented by convolving the input image I with the four filters and downsampling, yielding four subband images I_LL, I_LH, I_HL, and I_HH:

    I_LL = (f_LL ∗ I)↓2,  I_LH = (f_LH ∗ I)↓2,  I_HL = (f_HL ∗ I)↓2,  I_HH = (f_HH ∗ I)↓2    (3)

where ∗ denotes convolution and ↓2 standard downsampling by a factor of 2. The low-pass filter captures smooth regions and texture; the three high-pass filters extract horizontal, vertical, and diagonal edge information. Because of the biorthogonality of the 2D-DWT, the original image can be reconstructed exactly by the inverse 2D-DWT (Fig. 2).

This paper embeds the 2D-DWT and its inverse into the U-Net to improve the original U-Net structure. The 3-channel input plate fog image is first wavelet-transformed, which multiplies the channel count by 4 and halves the spatial size; a single convolution layer ("3×3 conv + LeakyReLU") then expands the input to a 16-channel image; finally, each U-Net level iterates convolution layers and the DWT to extract multi-scale edge features.
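As a concrete illustration of Eqs. (2)-(3), the following Python sketch uses the PyWavelets package (an implementation choice of ours, not the paper's) to decompose an image into the four Haar subbands and reconstruct it exactly:

```python
import numpy as np
import pywt

# One level of 2D Haar DWT on a grayscale image: four half-resolution
# subbands (LL approximation; LH/HL/HH horizontal/vertical/diagonal detail).
img = np.random.rand(64, 128)                    # stand-in for a plate crop
LL, (LH, HL, HH) = pywt.dwt2(img, 'haar')
recon = pywt.idwt2((LL, (LH, HL, HH)), 'haar')   # exact reconstruction
assert np.allclose(img, recon)
print(LL.shape)                                  # (32, 64): half size, as in Eq. (3)
```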
1.2 2D-DWT-Based Channel-Pixel Joint Attention Block (DCPA Block)
Treating different channel and pixel features equally in a dehazing network is unfavorable for non-uniform haze. To handle different types of features flexibly, Qin et al. [17] and Wu et al. [18] both use CA and PA: the former weights the different channel features, while the latter weights pixels with different haze density so that the network attends more to densely hazed pixels and high-frequency regions. Drawing on the wavelet channel-attention block of Yang et al. [13], this paper proposes a channel-pixel joint attention module based on the 2D-DWT, building DWT, CA, and PA into a parallel three-branch structure (Fig. 3).

In the two-branch attention part, a feature map x ∈ R^{C×H×W} (C channels, height H, width W) feeds both CA and PA. CA consists of an average-pooling layer, a convolution layer, and a LeakyReLU layer, converting the C×H×W spatial information into C×1×1 channel weights:

    y_CA = LeakyReLU_0.2(Conv¹₁(AvgPool(x)))    (4)

where Convʲᵢ(·) is a convolution with kernel size i×i and stride j, LeakyReLU_0.2(·) is the LeakyReLU activation with slope 0.2, and AvgPool(·) is average pooling. PA has a convolution layer and a LeakyReLU layer but no pooling, converting the C×H×W feature into a 1×H×W map:

    y_PA = LeakyReLU_0.2(Conv²₂(x))    (5)

(The source prints the output shapes of (4) and (5) the other way around; the shapes above follow the standard channel/pixel attention convention the text itself describes.) The CA and PA outputs are added elementwise (with broadcasting) and share a single Sigmoid activation that assigns the channel and pixel weights:

    y_A = Sigmoid(y_PA ⊕ y_CA)    (6)

where y_A ∈ R^{C×H×W} and ⊕ denotes elementwise addition. Finally, these weights multiply the feature map produced by the DWT branch (the discrete wavelet transform, a convolution layer, and a residual group), giving the block output

    y_DCPA = ResGroup(LeakyReLU_0.2(Conv¹₃(DWT(x)))) ⊗ y_A    (7)

where DWT(·) is the discrete wavelet transform, ⊗ elementwise multiplication, and ResGroup(·) the residual-group function. A sketch of this joint attention follows.
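A minimal PyTorch sketch of such a channel-pixel joint attention; the layer sizes are illustrative assumptions, not the authors' exact configuration:

```python
import torch
import torch.nn as nn

class ChannelPixelAttention(nn.Module):
    """Joint channel + pixel attention in the spirit of the DCPA block."""
    def __init__(self, ch):
        super().__init__()
        self.ca = nn.Sequential(                  # channel weights: C x 1 x 1
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch, 1), nn.LeakyReLU(0.2),
        )
        self.pa = nn.Sequential(                  # pixel weights: 1 x H x W
            nn.Conv2d(ch, 1, 3, padding=1), nn.LeakyReLU(0.2),
        )

    def forward(self, feat):
        # Broadcast sum of the two branches, shared sigmoid, as in Eq. (6)
        w = torch.sigmoid(self.ca(feat) + self.pa(feat))
        return feat * w                           # elementwise weighting, Eq. (7)
```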
1.3 Inter-Layer Multi-Scale Aggregation (LMSA)
Inspired by Park et al. [19], who use multi-level connections to recover image detail in dehazing, the DCPA block outputs of the first three encoder levels are aggregated across layers and scales so the network fully exploits latent feature information (Fig. 4). Let y^l_concat be the aggregated feature of encoder level l and D^l_in the feature entering decoder level l, with l = 1, 2, 3:

    y^l_concat = Cat( (Cat_{i=1..3} F^l_i(y^l_DCPA)), F_up(D^{l+1}_out) )    (8)
    D^l_in = LeakyReLU_0.2( Conv¹₁( LeakyReLU_0.2( y^l_SEBlock(y^l_concat) ) ) )    (9)

where Cat(·) is concatenation, F^l_i(·) resamples from level i to level l, F_up(·) is upsampling, and D^{i+1}_out is the output of decoder level i+1. Each aggregated feature map passes through an SEBlock that adaptively re-weights its channels:

    y^l_SEBlock(x^l_concat) = Sigmoid( FC( ReLU( FC( AvgPool(x^l_concat) ) ) ) )    (10)

where FC(·) is a fully connected layer (the SEBlock structure is shown in Fig. 5). A "LeakyReLU-Conv-LeakyReLU" stage then reduces the channel count of each level's output before it enters the "SOS" blocks of the first three decoder levels, raising the PSNR of the reconstruction. The fourth U-Net level aggregates the DCPA outputs of the first two levels with the DWT output of the third level, followed by the same SEBlock and "LeakyReLU-Conv-LeakyReLU" processing:

    y⁴_concat = Cat( (Cat_{i=1..2} F⁴_i(y⁴_DCPA)), E³_out )    (11)
    D⁴_in = LeakyReLU_0.2( Conv¹₁( LeakyReLU_0.2( y⁴_SEBlock(y⁴_concat) ) ) )    (12)

where E³_out is the DWT output of the third level.

1.4 "SOS" Boosting Block ("SOS" Block)
From the development of the "Strengthen-Operate-Subtract" (SOS) boosting strategy by Romano et al. [20] and Dong et al. [21], it is known that the algorithm refines input image features and raises the PSNR of the output image. The decoder of this plate-dehazing network therefore embeds Dong et al.'s SOS algorithm directly into the U-Net. The SOS update is approximately

    J^{n+1} = g(I + J^n) − J^n    (13)

where I is the input hazy image, J^n the dehazed estimate at level n, I + J^n the image strengthened with the input, and g(·) a dehazing or refinement operator. Embedding it into the U-Net works as follows: the skip connection between encoder and decoder becomes elementwise addition, adding the aggregated encoder feature D^i_in of level i to the decoder input D^{i+1}_out of level i (upsampled to the same size); the sum is refined by an optimization unit (a residual group); and the decoder input D^{i+1}_out is then subtracted. The block output is

    y^i_sos = g^i( D^i_in + (D^{i+1}_out)↑2 ) − (D^{i+1}_out)↑2    (14)

where ↑2 is upsampling by a factor of 2 (the combined structure is shown in Fig. 6).

1.5 Cross-Scale Aggregation Enhancement Block (CSAE Block)
To recover the fine spatial-domain image features that the wavelet-domain dehazing network overlooks, a residual-group-based cross-scale aggregation enhancement block supplements the detail of module A's final reconstructed image in the spatial domain (Fig. 7). The block consists of a convolution layer ("3×3 conv - LeakyReLU"), a residual group, average pooling, cross-scale aggregation (CSA), and concatenation. The convolution layer and residual group extract features from module A's spatial-domain output y_A, and average pooling decomposes them into four scales (S₁ = 1/4, S₂ = 1/8, S₃ = 1/16, S₄ = 1/32):

    y_S1, y_S2, y_S3, y_S4 = AvgPool( ResGroup( Conv¹₃(y_A) ) )    (15)

CSA then aggregates feature information across scales and spatial resolutions, fusing useful information at all scale levels and generating refined features at every resolution:

    y^{Sj}_CSA = ⊕_{Si ∈ {S1,...,S4}} F^{Sj}_{Si}(y_{Si})    (16)

where y^{Sj}_CSA is the CSA output at scale Sj (j = 1..4) and F^{Sj}_{Si}(·) resamples a feature map from scale Si to scale Sj; short connections are also introduced to improve gradient flow. Finally, a "LeakyReLU-Conv-LeakyReLU" stage adjusts the channel counts, and the total output of the block is

    y_CSACat = Cat_{j=1..4}( F_up( LeakyReLU_0.2( Conv¹₁(y^{Sj}_CSA) ) ) )
    y_CSAEBlock = Conv¹₃( Cat(y_CSACat, y_ResGroup) )    (17)

where y_CSACat concatenates the outputs after upsampling them to a common size. By aggregating features across scales and resolutions, the block gives the dehazing network strong contextual-information processing ability.

1.6 Loss Functions
To obtain better dehazing, training uses three losses, an image dehazing loss L_rh, an edge loss L_edge, and a contrast-enhancement loss L_ce, combined into the total loss

    L_total = α L_rh + γ L_edge − λ L_ce    (18)

where α, γ, and λ are non-negative constants.
1) Image dehazing loss: a simple combination of the L1 and L2 losses,

    L_rh = (1/N) Σ_{i=1..N} ( ‖I^i_gt − F_Net(I^i_haze)‖₁ + ‖I^i_gt − F_Net(I^i_haze)‖₂ )    (19)

where N is the number of input image pixels, I_gt the clean image, I_haze the plate fog image, and F_Net(·) the plate-dehazing network. The L1 term drives higher PSNR and SSIM values, while the L2 term preserves the fidelity of the dehazed image as far as possible.
2) Edge loss: to strengthen the edge contours of the output, Sobel edge detection produces edge maps E_{F_Net(I_haze)} and E_{I_gt} of the dehazed and clean images, and their L1 distance is the loss:

    L_edge = ‖E_{F_Net(I_haze)} − E_{I_gt}‖₁    (20)

3) Contrast-enhancement loss: to raise the color contrast of the dehazed plate image, the variance of each individual color channel is maximized (Eq. (21); the formula itself did not survive extraction, but it is the per-channel variance of F_Net(I_haze) about its mean pixel value F̄_Net(I_haze), indexed over pixels x). Because the output image should gain contrast, L_ce is to be maximized and is therefore subtracted in L_total.

2 Training and Testing
2.1 License-Plate Fog Dataset (LPHaze Dataset)
To solve the lack of a plate fog dataset for training, and inspired by the construction of the RESIDE dataset, the mature ASM theory is used to synthesize one. The plate images come mainly from the OpenData V3.1-SYSU plate image database provided by OpenITS, supplemented by the open CCPD dataset [22] of USTC, as follows:
1) Preprocessing: 2,291 clear images are selected at random from the OpenITS and CCPD datasets, and the plate regions of these clear images are cropped.
2) ASM parameter values: following the value ranges of the RESIDE dataset, a set of atmospheric light values A = [0.6, 0.7, 0.8, 0.9, 1.0] and scattering coefficients β = [0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6] are selected, with the scene depth d(x) fixed to 1.
3) Synthesizing foggy-clean image pairs: each clear plate image is paired with multiple fog images; according to the atmospheric scattering model and the parameters of step 2, one clean plate image yields 35 foggy images (examples in Fig. 8; see the sketch after this list).
4) Training/validation split: the training set contains 1,697 clean plate images and 59,395 corresponding fog images; the validation set contains 594 clean images and 20,790 fog images.
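A minimal sketch of step 3's fog synthesis under the stated parameter grid (the function name is ours, not the paper's):

```python
import numpy as np

def synthesize_fog(clean, A, beta, d=1.0):
    """Synthesize a foggy plate image with the atmospheric scattering model
    I = J*t + A*(1 - t), where t = exp(-beta * d); d is fixed to 1 as in the
    dataset construction above. `clean` is an RGB array in [0, 1]."""
    t = np.exp(-beta * d)
    return clean * t + A * (1.0 - t)

# 5 atmospheric lights x 7 scattering coefficients = 35 foggy images per plate.
for A in [0.6, 0.7, 0.8, 0.9, 1.0]:
    for beta in [0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6]:
        pass  # foggy = synthesize_fog(clean, A, beta)
```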
2.2 Experimental Setup
The self-built LPHaze Dataset is used for training and validation, with all images at 64×128×3 and a batch size of 64. Data augmentation applies random horizontal and vertical flips (flip probability randomly 0 or 1) to improve robustness. Training uses the Adam optimizer with default parameters (β₁ = 0.9, β₂ = 0.999), gradient clipping to accelerate convergence, 800 epochs, an initial learning rate of 1e-4, and loss weights α = 1, γ = 0.1, λ = 0.01. The network is written and trained in PyTorch on an NVIDIA Tesla T4 GPU. The experiments have two parts: testing the proposed dehazing model, and ablation studies. Tests use synthetic and naturally photographed plate fog images randomly drawn from the OpenITS plate database (disjoint from the LPHaze training set), and five (A, β) combinations are selected for qualitative and quantitative analysis, labeled A through E: (0.6, 0.8), (0.7, 1.0), (0.8, 1.2), (0.9, 1.4), and (1.0, 1.6).

3 Results and Analysis
3.1 Tests
1) Synthetic plate fog images. To assess performance, the algorithm is compared with recent classic dehazing methods: the guided-filter dark channel prior (GFDCP) [4], the end-to-end DehazeNet [8], the all-in-one AODNet [10], the end-to-end gated context aggregation network (GCANet) [23], and the end-to-end feature-fusion attention network (FFANet) [17], all tested on the LPHaze validation set. Mean PSNR (dB)/SSIM on the five combinations (Table 2):

  Combination   A (0.6,0.8)    B (0.7,1.0)    C (0.8,1.2)    D (0.9,1.4)    E (1.0,1.6)
  GFDCP         20.75/0.9461   19.23/0.9248   17.85/0.9007   15.63/0.8616   12.70/0.8035
  DehazeNet     19.31/0.8952   16.92/0.8460   14.20/0.7936   11.71/0.7450    9.57/0.6882
  AODNet        13.79/0.7757   15.12/0.8013   14.60/0.7452   11.64/0.6481    8.01/0.5349
  GCANet        18.86/0.9255   16.31/0.8906   13.82/0.8459   11.92/0.7913   10.14/0.7091
  FFANet        18.09/0.8947   18.65/0.8784   19.31/0.8512   12.76/0.7167    8.61/0.5407
  Proposed      24.50/0.9623   23.05/0.9505   22.66/0.9450   22.44/0.9409   22.02/0.9316

Relative to GCANet, mean PSNR on the five combinations improves by 5.64, 6.74, 8.84, 10.52, and 11.88 dB, and mean SSIM by 0.0368, 0.0599, 0.0991, 0.1496, and 0.2225. The PSNR and SSIM curves of the six algorithms (Fig. 9) likewise show that, in reconstruction quality, the proposed algorithm clearly outperforms the five classic algorithms across haze levels. Selected visual results on synthetic plate fog images (Fig. 10) show that the proposed algorithm leaves less residual haze and less color distortion than the other methods.

2) Natural plate fog images. The algorithm is also tested on 915 actually photographed plate fog images from the OpenITS database and compared visually with the five classic algorithms (Fig. 11). On common blue-background plates, the proposed algorithm rarely shows over-exposure or overall darkening and leaves little residual haze; on plates with other background colors (such as the yellow- and blue-background plates in Fig. 11), it keeps its advantage over the five classic algorithms and recovers color, characters, and other image information well.

3.2 Contribution of Each Module
To analyze the importance of each module, ablation studies on the LPHaze Dataset take an MWCNN dehazing network with ResGroup and "SOS" blocks as the baseline. The variants are: R1, the baseline (without the DCPA block, LMSA, or CSAE block); R2, the baseline with the DCPA block; R3, the baseline with DCPA, LMSA, and CSAE, i.e. the final network. Each variant trains for only 150 epochs at an initial learning rate of 1e-4 and is tested on the LPHaze validation set (Table 3):

  Network   "SOS"  DCPA  LMSA  CSAE   PSNR/dB   SSIM
  R1          √                        22.47    0.9421
  R2          √      √                 22.43    0.9432
  R3          √      √     √     √     23.27    0.9513

Without the DCPA block, LMSA, and CSAE block, mean PSNR/SSIM reach 22.47 dB/0.9421; adding all three raises them by 0.80 dB and 0.0092, enabling high-quality reconstructions.

3.3 Effect of the Loss Functions
To analyze the effectiveness of the loss functions, the model is trained with L1, L2, L_rh (the simple combination of L1 and L2), and L_total, each for 150 epochs at an initial learning rate of 1e-4, and evaluated on the LPHaze validation set (Table 4):

  Loss      L1       L2       L_rh     L_total
  PSNR/dB   22.74    22.19    23.06    23.27
  SSIM      0.9417   0.9371   0.9471   0.9513

With L_rh alone, mean PSNR and SSIM reach 23.06 dB and 0.9471, clearly better than L1 or L2 alone; the total loss L_total adds a further 0.21 dB and 0.0042, considerably improving network performance.

4 Conclusion
This paper proposes a license-plate fog-image dehazing algorithm based on a deep multi-level wavelet U-Net with MWCNN as the backbone. First, to integrate image features of different levels and scales in the wavelet and spatial domains as far as possible, the "SOS" boosting strategy is introduced and cross-layer connections are made within MWCNN, integrating, refining, and optimizing the features. Second, pixel attention, channel attention, and the discrete wavelet transform are effectively fused to remove as much haze from the plate image as possible. Finally, the cross-scale aggregation enhancement block compensates for the image-information differences between the wavelet and spatial domains, further improving reconstruction quality. Experiments on natural plate fog images and the LPHaze validation set show good dehazing across different atmospheric lights and haze levels, and an advantage in handling plates with different background colors.

References
[1] YADAV G, MAHESHWARI S, AGARWAL A. Foggy image enhancement using contrast limited adaptive histogram equalization of digitally filtered image: performance improvement[C]//2014 International Conference on Advances in Computing, Communications and Informatics (ICACCI). Delhi, India: IEEE, 2014: 2225-2231.
[2] RUSSO F. An image enhancement technique combining sharpening and noise reduction[J]. IEEE Transactions on Instrumentation and Measurement, 2002, 51(4): 824-828.
[3] GALDRAN A, BRIA A, ALVAREZ-GILA A, et al. On the duality between retinex and image dehazing[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE, 2018: 8212-8221.
[4] HE K, SUN J, TANG X. Guided image filtering[C]//Computer Vision - ECCV 2010, Proceedings, Part I. Springer, 2010, 6311: 1-14.
[5] HE K M, SUN J, TANG X O. Single image haze removal using dark channel prior[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence: 2341-2353.
[6] ZHU Q S, MAI J M, SHAO L. A fast single image haze removal

A Low-Light Image Enhancement Method for Coal Mines Based on a Noise-Aware Retinex Model

LI Zhenglong, WANG Hongwei, CAO Wenyan, ZHANG Fujing, WANG Yuheng
(1. College of Mining Engineering, Taiyuan University of Technology, Taiyuan 030024, China; 2. Shanxi Engineering Research Center for Coal Mine Intelligent Equipment, Taiyuan University of Technology; 3. College of Mechanical and Vehicle Engineering, Taiyuan University of Technology; 4. Postdoctoral Workstation, Shanxi Coking Coal Group Co., Ltd., Taiyuan 030024, China)

Abstract: Low-light images prevent many computer vision tasks from achieving the expected results and hamper subsequent image analysis and intelligent decision-making.

To address the fact that existing enhancement methods for low-light underground coal-mine images do not consider the real noise in the images, an enhancement method based on a noise-aware Retinex model is proposed.

A Retinex model containing a noise term is established. A noise estimation module (NEM) estimates the real noise; the original image and the estimated noise are then fed to an illumination estimation module (IEM) and a reflectance estimation module (REM), which generate the illumination and reflectance components. The two components are coupled, the illumination component is adjusted by gamma correction and similar operations, and the coupled image is divided by the adjusted illumination to produce the final enhanced image. A sketch of this recombination step follows.
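A minimal NumPy sketch of the recombination, under one plausible reading of that description (the actual modules are learned networks; the function name and the constant `gamma` are illustrative assumptions):

```python
import numpy as np

def recombine(illumination, reflectance, gamma=2.2):
    """Couple the estimated reflectance R and illumination L into a noise-free
    image R*L; dividing the coupled image by L recovers R, which is then
    re-lit with a gamma-corrected (brightened) illumination L**(1/gamma)."""
    coupled = reflectance * illumination
    safe_L = np.clip(illumination, 1e-3, 1.0)
    adjusted = np.power(safe_L, 1.0 / gamma)      # lifts dark illumination
    return np.clip(coupled / safe_L * adjusted, 0.0, 1.0)
```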

The NEM performs Bayer sampling on the noisy image through a three-layer CNN and reconstructs a three-channel feature map the same size as the original image. Both the IEM and the REM use ResNet-34 as the image-feature-extraction backbone and introduce a multi-scale asymmetric convolution and attention module (MACAM) to strengthen the network's detail filtering and its selection of important features. Qualitative and quantitative evaluations show that the method balances the relationship between light sources and dark surroundings and reduces the influence of real noise, performing well in image naturalness, fidelity, contrast, and structure; its enhancement outperforms models such as Retinex-Net, Zero-DCE, DRBN, DSLR, TBEFN, and RUAS. Ablation experiments verify the effectiveness of the NEM and the MACAM.

Keywords: coal-mine low-light images; image enhancement; noise-aware Retinex model; noise estimation; Bayer sampling; multi-scale asymmetric convolution; attention module

(English title given by the authors: "A method for enhancing low light images in coal mines based on Retinex model containing noise", LI Zhenglong, WANG Hongwei, CAO Wenyan, ZHANG Fujing, WANG Yuheng, Taiyuan University of Technology.) Received 2022-08-16; revised 2023-03-29.

A Fast Dehazing Algorithm Based on Negative Correction and Contrast Stretching

WANG Lin, BI Duyan, LI Xiaohui, HE Linyuan
(Aeronautics and Astronautics Engineering College, Air Force Engineering University, Xi'an 710038, China; Jinan Air Force Aviation Repair Factory of the PLA, Jinan 250021, China)
Journal of Computer Applications, 2016, 36(4): 1106-1110.

Abstract: Mainstream dehazing methods are too computationally complex to meet real-time requirements. A fast algorithm is proposed that raises the contrast of hazy images while preserving their color. Contrast-stretching the negative of the input image indirectly boosts the hazy image's contrast and saves computation time. Adaptive parameters are set from the image-structure information obtained via Lipschitz coefficients; the parameter setting combines a function of the Lipschitz coefficient with a function of the mean brightness of the local pixel block. Finally, a Sigmoid function stretches the image adaptively, yielding haze-free images with natural color and clear detail. Compared with He's algorithm, the proposed algorithm is about 90% faster. Experimental results show that it achieves good subjective visual quality while guaranteeing real-time operation, which better suits practical engineering applications. A sketch of the negative-stretch idea follows.

Keywords: fast image dehazing; negative correction; contrast stretching; Lipschitz exponent; Sigmoid function
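A minimal sketch of the idea described in the abstract; the fixed parameters stand in for the paper's Lipschitz- and brightness-adaptive ones:

```python
import numpy as np

def dehaze_negative_stretch(img, gain=10.0, mid=0.5):
    """Invert the hazy image, stretch the negative's contrast with a Sigmoid,
    and invert back. `img` is a float array in [0, 1]; `gain` and `mid`
    (illustrative constants) play the role of the adaptive parameters."""
    neg = 1.0 - img                                         # negative image
    stretched = 1.0 / (1.0 + np.exp(-gain * (neg - mid)))   # Sigmoid stretch
    return 1.0 - stretched                                  # back to positive
```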

In hazy weather, light is scattered by the large number of suspended particles in the atmosphere, distorting image color, lowering contrast, and markedly reducing visibility. This limits the function of outdoor machine-vision systems such as video surveillance, urban traffic, outdoor object recognition, satellite remote sensing, and aerial photography [1].

To solve these problems, researchers at home and abroad have proposed many dehazing methods. Existing techniques mainly target single images and can be divided, by whether they rely on the atmospheric scattering model, into two classes: physical-model-based methods and non-physical-model methods. Physical-model methods use the atmospheric scattering model and restore the image by solving back from the known hazy image for the scene albedo, an ill-posed inverse problem that depends heavily on data assumptions and prior information.


Effective Image Haze Removal Using Dark Channel Prior And Post-processing
Soo-Chang Pei¹, Tzu-Yen Lee²
¹ Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan R.O.C.
² Graduate Institute of Communication Engineering, National Taiwan University, Taipei, Taiwan R.O.C.
E-mail: ¹ pei@.tw, ² r99942049@.tw

Abstract—Haze removal is an important and necessary procedure for restoring acceptable visibility to human eyes. To improve dehazing quality, a new haze-removal method is proposed in this paper. Specifically, the method can be applied to haze images of various densities and distributions without sacrificing color naturalness, by using a refined Dark Channel Prior (DCP) and added post-processing. Experimental results demonstrate that the new method provides higher dehazing quality and color naturalness than other existing techniques.

I. INTRODUCTION
Haze removal is a challenging problem because of the unknown transmission and ill-posed conditions. The key characteristics of light, such as intensity and color, are changed by its interactions with the atmosphere, which can be classified into three categories: scattering, absorption, and emission. As a result, haze occurs due to atmospheric absorption and scattering: the irradiance received by the camera from a scene point is attenuated along the line of sight, and the incoming light is blended with the airlight.

Dehazing is the process of removing haze effects from captured images and recovering the original details and colors of natural scenes. A computationally efficient haze-removal algorithm was proposed in [2], but it suffers from unknown depth information and needs multiple images or extra information to estimate it. To overcome these drawbacks, single-image algorithms have been investigated in [1, 3, 4]. Fattal [4] estimated the albedo of the scene and inferred the medium transmission; this approach cannot handle heavy-haze images well and may fail when its assumption is invalid. Tan [3] maximizes the local contrast of the restored image on the assumption that a dehazed image should have high contrast; unfortunately, Tan's algorithm often produces halo artifacts and overstretched contrast. The approach of He et al. [1], based on the dark channel prior, is the most attractive: the prior says that at least one color channel should have almost-zero pixel values within a haze-free image. Applying it yields impressive results, but the color tone can be modified by inaccurate airlight estimation and the patch operation. Unlike the conventional methods above, the proposed method avoids the block and contouring effects and reduces the complexity incurred by refining the transmission; its restoration therefore yields high-quality images and is more robust than the approach of He et al. [1].

This paper is organized as follows. Section II briefly reviews the haze image model and the dark channel prior. Section III presents the proposed dehazing algorithm, applicable to all conditions of haze images. Experimental results are presented in Section IV, and Section V concludes the paper.

II. PRELIMINARIES
The proposed method builds on the haze image model [2, 3, 4, 5] and the dehazing method named the dark channel prior [1].
These fundamental concepts are briefly introduced in the following subsections.

A. Haze Image Model
The haze image model proposed by McCartney [2, 3, 4, 5], widely used in computer vision, is described as

    I(x) = t(x) J(x) + (1 − t(x)) A    (1)

where I is the observed intensity of the haze image, x is the pixel index, J is the scene radiance of the haze-free image that dehazing seeks to recover, t is the medium transmission expressing the part of the light that is not scattered and reaches the observer, and A is the global atmospheric light.

B. Dark Channel Prior
The dark channel prior is based on the assumption that most non-sky patches contain some pixels with very low intensity in at least one color channel. The darkest scene radiance around the x-th pixel is

    J_dark(x) = min_{c ∈ {r,g,b}} ( min_{y ∈ Ω(x)} J_c(y) )    (2)

where J_c is a color channel of J and Ω(x) is a local patch centered at x. J_dark is called the dark channel of J, and the statistical observation is named the dark channel prior.

The flow chart of the dark channel prior [1] is given in Fig. 1. In the first step, the estimated atmospheric light A is acquired by choosing the top 0.1% brightest pixels in the dark channel, which are the most haze-opaque pixels. In the second step, the coarse estimated transmission is

    t'(x) = 1 − w · min_{c} ( min_{y ∈ Ω(x)} I_c(y) / A_c )    (3)

where w (0 < w ≤ 1) is a constant that prevents the full elimination of haze; full elimination would produce an unnatural image and a loss of depth perception. In the third step, the estimated transmission map is refined by the soft matting method [6], and the refined t'(x) is written t(x). Finally, the estimated scene radiance is

    J(x) = (I(x) − A) / max(t(x), t₀) + A    (4)

where t₀ is the lower bound of the refined medium transmission. Applying these steps yields haze-removal results; however, the block effect and the complex computation of soft matting are disadvantages of this method, and the proposed method is provided to solve these problems.

III. PROPOSED METHOD
The flow chart of the proposed dehazing technique is shown in Fig. 2: it refines the first to third steps in Fig. 1, and post-processing is added to increase robustness.

Fig. 1. Flow chart of the dark channel prior [1].
Fig. 2. Flow chart of the proposed dehazing technique.

A. Refined Dark Channel Prior Method
To acquire higher-quality haze-free images, the first to third steps of the dark channel prior method [1] are modified in this subsection.

First, the patch operation is replaced with a pixel-wise operation to overcome the block effect, the contouring effect, and the inaccurate estimation of the sky region in (2). The pixel-wise form of (2) is

    J_dark(x) = min_{c ∈ {r,g,b}} ( min_{y ∈ pixel(x)} J_c(y) )    (5)

After replacing the operating mode of (2) by (5), the second step of the dark channel prior method is modified as

    t''(x) = 1 − w · min_{c} ( min_{y ∈ pixel(x)} I_c(y) / A_c )    (6)

where A_c is the global atmospheric light estimated in the first step of Fig. 1. In this paper, all pixels of the dark channel are first sorted in ascending order of intensity as J_dark_order; the value at the 96% position of J_dark_order is then chosen as the airlight, to avoid picking wrongly bright objects such as white cars and buildings as the light source. A sketch of this refined pipeline follows.
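A minimal NumPy sketch of the refined pipeline: percentile airlight (one plausible reading of the 96% rule above), pixel-wise transmission per Eq. (6), and recovery per Eq. (4); the function names are ours:

```python
import numpy as np

def estimate_airlight(img, dark, pct=96):
    """Average the colors of pixels whose dark-channel value is at or above
    the pct-th percentile, instead of the very brightest pixels, so white
    cars or buildings are less likely to be picked. img is RGB in [0, 1]."""
    thresh = np.percentile(dark, pct)
    return img[dark >= thresh].mean(axis=0)          # -> A as an RGB triple

def coarse_transmission(img, A, w=0.95):
    # Pixel-wise form of Eq. (6): t''(x) = 1 - w * min_c(I_c(x) / A_c)
    return 1.0 - w * (img / A).min(axis=2)

def recover(img, A, t, t0=0.1):
    # Eq. (4): J = (I - A) / max(t, t0) + A
    t = np.maximum(t, t0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)
```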
Note that the modified method solves the problems caused by the patch operation mentioned above.

To decrease the complexity of soft matting in the third step, the guided filter [9] is employed to smooth the transmission map while capturing sharp edge discontinuities and outlining the profiles of objects. The guided filter is an efficient linear-time algorithm that provides exact solutions, with computational complexity depending on its kernel size. The refined transmission of (6) is represented as

    t'(x) = (1/|W|) Σ_{k : x ∈ W_k} ( a_k t''(x) + b_k )    (7)

where |W| is the number of pixels in W_k, W_k is a window centered at pixel k, and (a_k, b_k) are linear coefficients assumed to be constant in W_k. Applying the guided filter yields the refined transmission map t'(x), and the estimated and improved scene radiance J'(x) is then acquired as in (4):

    J'(x) = (I(x) − A) / max(t'(x), t₀) + A    (8)

A sketch of this guided-filter refinement follows.
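A sketch of the refinement using OpenCV's guided-filter implementation, which lives in cv2.ximgproc and requires the opencv-contrib package (an implementation choice of ours, not the paper's code):

```python
import cv2
import numpy as np

def refine_transmission(hazy_bgr, t_coarse, radius=40, eps=1e-3):
    """Edge-preserving smoothing of the coarse transmission (Eq. (7)): the
    hazy image itself guides the filter, so scene edges survive in t."""
    gray = cv2.cvtColor(hazy_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    return cv2.ximgproc.guidedFilter(gray, t_coarse.astype(np.float32),
                                     radius, eps)
```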
B. Post-Processing
To make the proposed haze-removal method more robust, post-processing is added for the situation where the haze of the input image is non-uniformly distributed with insufficient brightness. The multi-scale retinex (MSR) [7] and the bilateral filter in local contrast correction (BFLCC) [8] are applied to perform the post-processing. The MSR method suits the case where the sky covers most of the haze image, and the BFLCC method suits the case where the sky is not the main component of the haze image. Both post-processing approaches can remove the haze clearly; in the sky region the color of MSR is more natural than BFLCC, while BFLCC is better than MSR in non-sky regions since it simultaneously avoids some artifacts. Fig. 3 compares the dehazed results with the two post-processing approaches; choosing the appropriate one is investigated under two cases.

Case 1: The MSR method [7] is composed of Gaussian functions at different scales and adds different weights to adjacent regions of various sizes. To keep the image edges and the color information simultaneously, MSR combines N surround constants:

    M_i(x, y) = Σ_{n=1..N} w_n R_{i,n}(x, y),  i ∈ {R, G, B}    (9)

where M is the output image, x and y are the pixel's vertical and horizontal indexes, i indexes the spectral band, w_n is the weight associated with the n-th scale, and R_{i,n} is the single-scale retinex of the i-th component at the n-th scale:

    R_{i,n}(x, y) = log(J'_i(x, y)) − log(J'_i(x, y) ∗ F_n(x, y))    (10)

with F_n(x, y) the Gaussian surround function

    F_n(x, y) = K exp( −(x² + y²)/c_n² )    (11)

where c_n is the Gaussian surround space constant and K normalizes the surround function (11).

Case 2: The BFLCC method [8] improves on gamma correction by replacing an inverted Gaussian low-pass mask with an inverted mask filtered by a bilateral filter:

    O(x, y) = 255 · ( J'(x, y)/255 )^{ α^{(128 − BF_mask(x, y))/128} }    (13)

where O is the output image and α a parameter depending on the image properties and on BF_mask. BF_mask is defined over a window of size (2K+1)×(2K+1) and is given by the bilaterally filtered image of the inverted version of the input:

    J'_inv(x, y) = 255 − J'(x, y)    (14)

A sketch of the MSR computation follows.
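A compact MSR sketch following Eqs. (9)-(10); the scale values are common defaults, not necessarily the paper's:

```python
import cv2
import numpy as np

def multiscale_retinex(img, sigmas=(15, 80, 250), weights=None):
    """Average single-scale retinex outputs log(I) - log(I * F_n) over
    several Gaussian surround scales, per color channel."""
    img = img.astype(np.float32) + 1.0               # avoid log(0)
    weights = weights or [1.0 / len(sigmas)] * len(sigmas)
    out = np.zeros_like(img)
    for w, s in zip(weights, sigmas):
        blur = cv2.GaussianBlur(img, (0, 0), s)      # I convolved with F_n
        out += w * (np.log(img) - np.log(blur))
    return out  # typically rescaled to [0, 255] for display
```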
IV. EXPERIMENTAL RESULTS
For comparison, the haze images are taken from [1] and from ROHM Semiconductor; dehazing results for real hazy pictures of Huang Shan under the two post-processing methods are also presented in Fig. 3. The experiments cover two cases. Case 1: the input image has sufficient brightness and uniformly distributed haze; it is dehazed by applying only the refined dark channel prior method of Fig. 2, without post-processing (Figs. 4 and 5). In Fig. 4 the result of He et al. [1] is fuzzier and darker (for example, the brick-house region), which the proposed method avoids. In Fig. 5 the ROHM result over-exposes the area behind the cable car and renders the mountains with unnatural color; the proposed method fixes both problems. Case 2: the input image has insufficient brightness and non-uniformly distributed haze; it is dehazed by the refined dark channel prior method with post-processing (Fig. 2) to ensure robustness. As Fig. 6 shows, the results of He et al. are fuzzier and darker, and some regions (for example, the far field) are not recovered clearly, so the output quality is lower; in contrast, the proposed method obtains clearer and brighter dehazed images, with better recovery of the far region than He et al. [1] (Fig. 6(b)). Fig. 6 also compares applying the post-processing (BFLCC) without the dehazing procedure (Fig. 6(c)) against the full proposed method (Fig. 6(d)).

V. CONCLUSIONS
An attractive haze-removal method with high dehazing quality is proposed in this paper. The method is robust and provides reliable haze removal for various cases of haze images; compared with other existing dehazing methods, it is superior in dehazing quality and color naturalness. In the future, algorithmic refinements could be integrated into the method to decrease complexity and further improve the dehazed image quality.

Fig. 3. Insufficient brightness, non-uniform haze: comparison of MSR and BFLCC post-processing. (a) input haze image of a Huang Shan picture; (b) haze-free image after MSR; (c) haze-free image after BFLCC.
Fig. 4. Sufficient brightness, uniform haze: comparison with He's work [1]. (a) input haze image; (b) He et al.'s result; (c) proposed method's result.
Fig. 5. Sufficient brightness, uniform haze: comparison with ROHM's IC BU6521KV. (a) haze image; (b) ROHM's result; (c) proposed method's result.
Fig. 6. Insufficient brightness, non-uniform haze: comparison with He's work [1]. (a) input haze images; (b) He et al.'s results; (c) results of applying BFLCC without the dehazing procedure; (d) results of the proposed method.

REFERENCES
[1] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," IEEE Conference on Computer Vision and Pattern Recognition, pp. 1956-1963, 2009.
[2] S. G. Narasimhan and S. K. Nayar, "Vision and the atmosphere," International Journal of Computer Vision, vol. 48, no. 3, pp. 233-254, 2002.
[3] R. Tan, "Visibility in bad weather from a single image," IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-8, 2008.
[4] R. Fattal, "Single image dehazing," ACM SIGGRAPH, pp. 1-9, 2008.
[5] S. G. Narasimhan and S. K. Nayar, "Chromatic framework for vision in bad weather," IEEE Conference on Computer Vision and Pattern Recognition, pp. 598-605, 2000.
[6] A. Levin, D. Lischinski, and Y. Weiss, "A closed form solution to natural image matting," IEEE Conference on Computer Vision and Pattern Recognition, pp. 61-68, 2006.
[7] D. Jobson, Z. Rahman, and G. Woodell, "A multiscale retinex for bridging the gap between color images and the human observation of scenes," IEEE Transactions on Image Processing, pp. 965-976, 1997.
[8] R. Schettini, F. Gasparini, S. Corchs, and F. Marini, "Contrast image correction method," Journal of Electronic Imaging, 19(2), 023005, 2010.
[9] K. He, J. Sun, and X. Tang, "Guided image filtering," Lecture Notes in Computer Science, vol. 6311, 2010.
