FAST BILATERAL FILTERING BY ADAPTING BLOCK SIZE
A Feature Selection Method Combining the Filter Model with an Improved Adaptive GA
Feature selection refers to choosing a representative subset of features from the original feature set in order to reduce data dimensionality while preserving the classification performance of the feature set.
According to how the classifier participates in the feature selection process, feature selection methods fall into three classes: filter, wrapper, and embedded methods.
Filter methods first screen the initial features and then train the model on the filtered features, so they are computationally cheap and easy to implement but give lower classification accuracy; wrapper methods must train the classifier many times during feature selection, so their computational cost is usually much higher than that of filter methods, but their classification results are better; embedded methods perform feature selection automatically during classifier training, giving clear classification gains but with complex parameter settings and high time complexity.
The genetic algorithm (GA) is a population-based, iterative meta-heuristic optimization algorithm: initialized individuals are processed through the algorithm's encoding scheme and the basic genetic operators of selection, crossover, and mutation, individuals are selected for inheritance according to their fitness values, and after iteration the individual with the highest fitness is obtained [1].
Modeled on biological evolution, the genetic algorithm has good global search capability.
QIU Yunfei, GAO Huacong (Software College, Liaoning Technical University, Huludao 125100, Liaoning, China). Abstract: To address the curse of dimensionality and overfitting that arise when selecting features from high-dimensional, small-sample data, a feature selection method combining the Filter and Wrapper modes (ReFS-AGA) is proposed.
The method combines the ReliefF algorithm with normalized mutual information to evaluate feature relevance and quickly screen out the important features; it then applies an improved adaptive genetic algorithm, introducing an elitist strategy to balance feature diversity, and designs a new evaluation function that takes minimizing the number of features and maximizing classification accuracy as objectives, with the feature count acting as a regulating term, so that the optimal feature subset is obtained efficiently during iterative evolution.
Using different classification algorithms to classify the reduced feature subsets on gene expression data, experiments show that the method effectively removes irrelevant features and improves the efficiency of feature selection; compared with the ReliefF algorithm and the two-stage feature selection algorithm mRMR-GA, it attains the smallest feature subset dimensionality while raising the average classification accuracy by 11.18 and 4.04 percentage points, respectively.
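The abstract above describes a GA whose fitness rewards classification accuracy while penalizing the number of selected features. The following is a minimal, hypothetical sketch of that idea in Python; the stand-in classifier (KNN), the weighting constant alpha, the population size, and the genetic operators are illustrative assumptions, not the ReFS-AGA design.

# Minimal GA feature-selection sketch (illustrative; not the ReFS-AGA algorithm itself).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y, alpha=0.1):
    # Reward accuracy, penalize the fraction of features kept.
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - alpha * mask.sum() / mask.size

def ga_select(X, y, pop_size=30, generations=40, p_mut=0.02, rng=np.random.default_rng(0)):
    n = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n))
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        elite = pop[scores.argmax()].copy()            # elitism: keep the best individual
        probs = scores - scores.min() + 1e-9
        probs /= probs.sum()
        parents = pop[rng.choice(pop_size, size=pop_size, p=probs)]
        cuts = rng.integers(1, n, size=pop_size // 2)  # one-point crossover
        for i, c in enumerate(cuts):
            a, b = parents[2 * i], parents[2 * i + 1]
            a[c:], b[c:] = b[c:].copy(), a[c:].copy()
        flip = rng.random(parents.shape) < p_mut       # bit-flip mutation
        parents = np.where(flip, 1 - parents, parents)
        parents[0] = elite
        pop = parents
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()].astype(bool)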
Fast Motion Deblurring
Abstract: This paper introduces a fast deblurring method that handles a moderately sized, single static image in only a few seconds.
By introducing a novel prediction step and working with image derivatives rather than individual pixel values, we accelerate both the sharp (latent) image estimation and the kernel estimation in the iterative deblurring process.
In the prediction step, simple image processing techniques are used to infer sharp edges from the estimated sharp image, and these edges alone are then used for kernel estimation.
With this approach, a computationally efficient Gaussian prior is sufficient for the deconvolution used to estimate the sharp image, and small deconvolution artifacts are suppressed in the prediction step.
For kernel estimation, the optimization function is formulated in terms of image derivatives; by reducing the number of Fourier transforms required by the conjugate gradient method and improving the conditioning of the numerical system, the computation converges faster.
Experimental results show that our method is much faster than previous work, while the deblurring quality is comparable.
We also show that this formulation requires less computation than working with individual pixel values, and a GPU (Graphics Processing Unit) implementation provides a further speedup, making the method fast enough for practical use.
CR Categories: I.4.3 [Image Processing and Computer Vision]: Enhancement - Sharpening and deblurring. Keywords: motion blur, deblurring, image restoration. 1 Introduction. Motion blur is a common cause of image degradation and is accompanied by unavoidable information loss.
It arises from the nature of image sensors, which accumulate incoming light over a period of time to form an image.
If the camera's image sensor moves during the exposure, the image becomes motion blurred.
If the motion blur is shift-invariant, it can be modeled as the convolution of a sharp image with a motion blur kernel, where the kernel describes the trace of the sensor.
Removing the motion blur from an image then becomes a deconvolution operation.
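As a concrete illustration of the shift-invariant model just described, the sketch below synthesizes a motion-blurred image by convolving a sharp image with a linear blur kernel (B = K * I + n); the kernel length and noise level are arbitrary assumptions for demonstration.

# Sketch: shift-invariant motion blur as convolution.
import numpy as np
from scipy.signal import convolve2d

def linear_motion_kernel(length=9):
    # Horizontal motion blur kernel: the "trace" of a sensor moving along one row.
    k = np.zeros((length, length))
    k[length // 2, :] = 1.0
    return k / k.sum()

def blur(sharp, kernel, noise_sigma=0.01, rng=np.random.default_rng(0)):
    blurred = convolve2d(sharp, kernel, mode="same", boundary="symm")
    return blurred + rng.normal(0.0, noise_sigma, sharp.shape)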
In non-blind deconvolution, the motion blur kernel is known, and the problem is to recover a sharp image from the blurred observation.
In blind deconvolution, the blur kernel is unknown, and recovering the sharp image becomes far more challenging.
In this paper we address blind deconvolution of a single static image, where both the blur kernel and the sharp image are estimated from the input blurred image.
Single-image blind deconvolution is an ill-posed problem because the number of unknowns exceeds the number of observations.
Early approaches imposed constraints on the motion blur kernel by using parameterized forms [Chen et al. 1996; Chan and Wong 1998; Yitzhaky et al. 1998; Rav-Acha and Peleg 2005].
Super-Resolution Reconstruction of Magnetic Resonance Images Based on Adaptive Dual Dictionaries
LIU Zhen-qi, BAO Li-jun, CHEN Zhong
(Department of Electronic Science, Xiamen University, Xiamen 361005, China)
Abstract: To improve the image quality of magnetic resonance imaging, a super-resolution denoising reconstruction method based on an adaptive dual dictionary is proposed. A denoising capability is built into the super-resolution reconstruction process, so that noise in the image is effectively filtered out while the resolution is improved, organically combining super-resolution reconstruction with denoising. The method uses a clustering-PCA algorithm to extract the main features of the image and construct a principal-feature dictionary, and uses a training procedure to design a self-learning dictionary that expresses the image's detail information; together they form an adaptive dual dictionary with good sparsity and adaptivity. Experiments show that, compared with other super-resolution algorithms, the method produces markedly better super-resolution reconstruction, with improvements in both peak signal-to-noise ratio and mean structural similarity.
Electro-Optic Technology Application, Vol. 28, No. 4, August 2013. Signal and Information Processing.
A Low-Noise Wide Dynamic Range Algorithm for Camera Imaging
Electronic Technology & Software Engineering, Image & Multimedia Technology. Keywords: wide dynamic range; camera imaging; algorithm. By SUN Fengjun (1), XU Xiaotian (2). 1 Introduction. Generally speaking, the dynamic range of the human eye is about 0-10000 cd/m², whereas the dynamic range of a camera is typically only about 100 cd/m².
Wide dynamic range technology is designed to extend a camera's dynamic range and thereby mimic this capability of the human eye.
Wide dynamic range (WDR) imaging is sometimes also called high dynamic range (HDR) imaging.
In recent years cameras have come into very wide use and image quality has improved rapidly, yet WDR imaging has not been adopted nearly as widely.
The technique is mostly used in high-end cameras and rarely in mobile phone imaging.
The reason is that most WDR algorithms suffer from high computational cost.
Although many WDR imaging techniques now exist, each has its own weaknesses, and how to overcome them is a key research topic.
Examples include frame-rate drops, increased noise, long image processing times, demanding chip performance requirements, and long development cycles.
This paper presents a circuit design that integrates a very simple WDR algorithm and overcomes the drawbacks listed above.
In a camera picture, people want the bright regions to look natural rather than saturated.
Therefore, in over-exposed areas, overly bright pixel values are reduced to make the image more realistic; this, however, makes regions that were already dark even darker.
To brighten these dark regions, gain is applied.
But raising the gain introduces a large amount of noise into the image; this is a side effect of digital WDR.
To reduce this noise, the current pixel is computed jointly with its surrounding pixels.
Because the noise is temporal and consists of small random values, it can be reduced by averaging.
WDR processing is not applied to every image, only to high-contrast images.
In general, the higher the contrast, the more elaborate the WDR processing used.
Consequently, a typical WDR algorithm has many complex control factors that are adjusted according to the image contrast.
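A minimal sketch of the ideas above: compress over-bright pixels, apply gain to dark regions, and average a pixel with its neighbors to suppress the noise amplified by the gain. The thresholds, gain value, and window size are illustrative assumptions, not the parameters of the algorithm in the article.

# Sketch: naive digital WDR with neighborhood averaging to limit gain noise.
import numpy as np
from scipy.ndimage import uniform_filter

def simple_wdr(img, bright_thresh=0.8, dark_thresh=0.2, gain=2.0, win=3):
    # img: float image in [0, 1]
    out = img.copy()
    hi = img > bright_thresh
    out[hi] = bright_thresh + 0.5 * (img[hi] - bright_thresh)   # compress highlights
    dark = img < dark_thresh
    out[dark] = np.clip(img[dark] * gain, 0.0, 1.0)             # lift shadows
    smoothed = uniform_filter(out, size=win)                    # average with neighbors
    out[dark] = smoothed[dark]                                  # averaging only where gain added noise
    return np.clip(out, 0.0, 1.0)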
Research on Adaptive Filtering Algorithms Based on Deep Reinforcement Learning
1. Introduction. Adaptive filtering refers to designing a filter suited to the current signal according to the signal's statistical characteristics.
The technique can be used for signal denoising, signal feature extraction, signal recovery, and related applications.
Adaptive filtering algorithms based on deep reinforcement learning have recently attracted wide attention and are applied extensively in audio processing, image processing, control systems, and other fields.
This article reviews the current state and future directions of research on adaptive filtering algorithms based on deep reinforcement learning.
2. Principles and Classification of Adaptive Filtering. Adaptive filtering is a method that adjusts the filter response according to the properties of the input signal.
Its basic principle is to use statistical properties of the input signal, such as its peaks, mean, and variance, to adjust the filter's response characteristics so that they better match the current input.
Commonly used adaptive filtering algorithms include the least mean squares (LMS) algorithm, the normalized LMS (NLMS) algorithm, and the recursive least squares (RLS) algorithm.
By filter structure, adaptive filtering can be divided into linear and nonlinear adaptive filtering.
Linear adaptive filtering uses a linear filter structure: the output signal is the convolution of the input signal with the filter coefficients.
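For reference, the classic LMS update mentioned above can be written in a few lines; the step size mu and the filter length used here are illustrative choices.

# Sketch: least mean squares (LMS) adaptive filter.
import numpy as np

def lms(x, d, n_taps=8, mu=0.01):
    # x: input signal, d: desired signal; returns output y, error e, final weights w.
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]     # most recent n_taps samples
        y[n] = w @ u                  # filter output (convolution at time n)
        e[n] = d[n] - y[n]            # estimation error
        w += 2 * mu * e[n] * u        # LMS weight update
    return y, e, w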
Nonlinear adaptive filters are not limited to linear structures; filters of any required structure, such as fuzzy filters or wavelet filters, can be designed.
3. Deep Reinforcement Learning and Its Application to Adaptive Filtering. Deep reinforcement learning is an adaptive learning method that combines deep learning with reinforcement learning.
In deep reinforcement learning, an agent learns, through interaction with its environment, how to maximize the expected long-term return on a given task.
Deep reinforcement learning has been widely applied in speech recognition, image processing, game AI, intelligent robotics, and other fields.
Its application to adaptive filtering is mainly based on convolutional neural network (CNN) and recurrent neural network (RNN) architectures.
Deep reinforcement learning networks use unsupervised learning to learn the filter's response characteristics and filter coefficients autonomously from large amounts of data.
Because they can adaptively extract signal features, they can remove noise more accurately and thus improve filtering performance.
In practice, deep reinforcement learning has been widely used in image denoising, speech denoising, control systems, and other areas.
One advantage of deep reinforcement learning is that it can replace traditional adaptive algorithms.
A traditional adaptive filter must compute an estimated signal at every time step, whereas a filter based on deep reinforcement learning can learn directly from the input signal, eliminating the estimation step and greatly increasing the filter's processing speed.
Adaptive Gaussian Guided Filter
A Fast Image Retrieval Algorithm Based on Video Adaptive Sampling
Software Guide, Vol. 22, No. 7, July 2023. TAN Wenbin (1), HUANG Yiwang (1,2), LIU Sheng (1) (1. College of Data Science, Tongren University, Tongren 554300, China; 2. Guizhou Provincial Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China). Abstract: To address the heavy computation and long runtime of target image retrieval in smart agricultural monitoring systems, a video adaptive sampling algorithm is proposed.
First, the sampling rate of video frames is adjusted adaptively according to changes in the similarity between adjacent frames so as to extract keyframes, ensuring that the extracted keyframes can replace their neighboring frames in the target image retrieval computation.
Then the keyframes are arranged along the time axis to build a video-frame retrieval operator that replaces the original video in the retrieval computation, eliminating a large amount of repeated work when searching the video for a target image and thereby improving retrieval efficiency.
Experiments show that the retrieval operator built by the adaptive sampling algorithm achieves a higher and more stable detection rate than those built by fixed-rate sampling and the local-minimum keyframe algorithm.
On the premise that all target images are detected, replacing the original video with the video-frame retrieval operator in the retrieval computation yields a substantial optimization, cutting time consumption by more than 60%, which is significant for improving the efficiency of target image retrieval in smart agricultural monitoring systems.
Keywords: adaptive sampling; image similarity; target image frame; video frame retrieval operator. DOI: 10.11907/rjdk.231260. CLC number: TP391.41. Document code: A. Article ID: 1672-7800(2023)007-0131-07.
0 Introduction. In recent years, with the rise of smart agriculture, plantations have gradually moved toward unmanned, automated, and intelligent management.
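A minimal sketch of the adaptive sampling idea described in the abstract: the sampling stride grows while successive sampled frames stay very similar, and shrinks (keeping the frame as a keyframe) when similarity drops. The histogram-correlation similarity measure, thresholds, and stride limits are illustrative assumptions rather than the paper's exact design.

# Sketch: adaptive keyframe sampling driven by adjacent-frame similarity.
import cv2

def frame_similarity(a, b, bins=64):
    ha = cv2.calcHist([cv2.cvtColor(a, cv2.COLOR_BGR2GRAY)], [0], None, [bins], [0, 256])
    hb = cv2.calcHist([cv2.cvtColor(b, cv2.COLOR_BGR2GRAY)], [0], None, [bins], [0, 256])
    return cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL)

def adaptive_keyframes(video_path, high=0.98, low=0.90, max_step=30):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    keyframes, step, idx = ([(0, prev)] if ok else []), 1, 0
    while ok:
        for _ in range(step):                      # skip `step` frames
            ok, frame = cap.read()
            idx += 1
            if not ok:
                break
        if not ok:
            break
        sim = frame_similarity(prev, frame)
        if sim > high:
            step = min(step * 2, max_step)         # frames barely change: sample more sparsely
        elif sim < low:
            step = max(step // 2, 1)               # content changed: sample more densely
            keyframes.append((idx, frame))         # keep this frame as a keyframe
        prev = frame
    cap.release()
    return keyframes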
Noise Reduction Methods
Noise reduction is a common challenge in various fields, including photography, audio recording, and signal processing. There are several methods to address this issue, each with its own advantages and disadvantages.
One approach to noise reduction is filtering, which involves using algorithms to remove unwanted frequencies or signals from the data. This method is effective in certain scenarios, but it may also result in the loss of important information or introduce unwanted artifacts.
Another common method is spectral subtraction, which involves estimating the noise spectrum and subtracting it from the original signal. While this approach can be effective in certain situations, it also relies on accurate noise estimation, which can be challenging in real-world scenarios.
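A compact illustration of the spectral subtraction idea just described; the STFT parameters and the assumption that the first few frames contain only noise are illustrative simplifications.

# Sketch: basic magnitude spectral subtraction.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(noisy, fs, noise_frames=10, floor=0.05):
    f, t, Z = stft(noisy, fs=fs, nperseg=512)
    mag, phase = np.abs(Z), np.angle(Z)
    noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)  # noise estimate from leading frames
    clean_mag = np.maximum(mag - noise_mag, floor * mag)           # subtract, keep a spectral floor
    _, clean = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=512)
    return clean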
Introduction to HDR
Encoding/decoding should be done using partial-precision instructions
Rounding errors inherent in the texture format are much bigger
5, What does HDR require?
Dazzling Light
Dazzling Reflection
HDR Fresnel
Bright reflection off low-reflectance surfaces
Exposure Control
Realistic depth of field (HDR Depth-of-Field)
2, What can be done with HDR?
Dazzling light, dazzling reflection, Fresnel reflection
bright reflection in the normal direction
Exposure control effects, realistic depth-of-field effects, realistic motion blur
RGB to E8R8G8B8 conversion (E8R8G8B8 Encoding, HLSL)
// a^n = b
#define LOG(a, b) ( log((b)) / log((a)) )
#define EXP_BASE   (1.06)
#define EXP_OFFSET (128.0)

// Pixel Shader (6 instruction slots)
// rgb already exposure-scaled
float4 EncodeHDR_RGB_RGBE8(in float3 rgb)
{
    // Compute a common exponent
    float fLen = dot(rgb.rgb, 1.0);
    float fExp = LOG(EXP_BASE, fLen);
    float4 ret;
    ret.a = (fExp + EXP_OFFSET) / 256;
    ret.rgb = rgb / fLen;
    return ret;
}

// More accurate encoding
#define EXP_BASE   (1.04)
#define EXP_OFFSET (64.0)

// Pixel Shader (13 instruction slots)
float4 EncodeHDR_RGB_RGBE8(in float3 rgb)
{
    // Compute a common exponent
    // based on the brightest color channel
    float fLen = max(rgb.r, rgb.g);
    fLen = max(fLen, rgb.b);
    float fExp = floor( LOG(EXP_BASE, fLen) );
    float4 ret;
    ret.a = clamp( (fExp + EXP_OFFSET) / 256, 0.0, 1.0 );
    ret.rgb = rgb / pow(EXP_BASE, ret.a * 256 - EXP_OFFSET);
    return ret;
}
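The slides show only the encoder; for completeness, a matching decoder would invert the shared-exponent packing roughly as sketched below, written against the same LOG/EXP_BASE/EXP_OFFSET conventions as the second encoder above. This is an assumed counterpart, not code from the original slides.

// Assumed decoder for the more accurate E8R8G8B8 encoding above.
float3 DecodeHDR_RGBE8_RGB(in float4 rgbe)
{
    // Recover the shared exponent stored in the alpha channel,
    // then rescale the mantissa stored in rgb.
    float fScale = pow(EXP_BASE, rgbe.a * 256 - EXP_OFFSET);
    return rgbe.rgb * fScale;
}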
A Noise-Classification Denoising Algorithm for Point Cloud Models
LI Pengfei; WU Hai'e; JING Junfeng; LI Renzhong. Abstract: To address the presence of noise at different scales in 3D point cloud data and the computational cost of denoising and smoothing, a noise-classification denoising algorithm for point cloud models is proposed.
Based on the distribution characteristics of the noise points, the noise is divided into large-scale and small-scale noise; large-scale noise is first removed with statistical filtering combined with radius filtering, and small-scale noise is then smoothed with fast bilateral filtering, accomplishing denoising and smoothing of the point cloud model.
Compared with traditional bilateral filtering, smoothing the point cloud data with fast bilateral filtering effectively improves computational efficiency.
Experimental results show that the algorithm rapidly smooths and removes point cloud noise while effectively preserving the geometric features of the scanned object.
Journal: Computer Engineering and Applications, 2016, 52(20): 188-192. Keywords: point cloud denoising; fast bilateral filtering; statistical filtering; conditional filtering; smoothing. Affiliation: School of Electronics and Information, Xi'an Polytechnic University, Xi'an 710048, China. Citation: LI Pengfei, WU Hai'e, JING Junfeng, et al. Computer Engineering and Applications, 2016, 52(20): 188-192. As point cloud data finds ever wider use in 3D solid modeling and virtual reality, efficient and accurate data processing [1] and 3D model reconstruction [2-3] have become hot research topics in reverse engineering.
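A minimal sketch of the two-stage idea summarized above, assuming the Open3D library is available for the first stage; the neighbor counts, radii, and thresholds are illustrative, and the small-scale smoothing stage is approximated here by a plain k-nearest-neighbor average rather than the paper's fast bilateral filter.

# Sketch: two-stage point cloud denoising (large-scale outliers, then small-scale smoothing).
import numpy as np
import open3d as o3d

def denoise_point_cloud(pcd, nb_neighbors=20, std_ratio=2.0, nb_points=16, radius=0.05):
    # Stage 1: remove large-scale noise with statistical + radius filtering.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=nb_neighbors, std_ratio=std_ratio)
    pcd, _ = pcd.remove_radius_outlier(nb_points=nb_points, radius=radius)
    # Stage 2 (placeholder for fast bilateral filtering): average each point with its k neighbors.
    pts = np.asarray(pcd.points)
    tree = o3d.geometry.KDTreeFlann(pcd)
    smoothed = pts.copy()
    for i, p in enumerate(pts):
        _, idx, _ = tree.search_knn_vector_3d(p, 10)
        smoothed[i] = pts[idx].mean(axis=0)
    pcd.points = o3d.utility.Vector3dVector(smoothed)
    return pcd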
Bilateral Filtering for Gray and Color Images (the bilateral filter)
Bilateral Filtering for Gray and Color ImagesC.TomasiR.Manduchi Computer Science DepartmentInteractive Media Group Stanford University Apple Computer,Inc.Stanford,CA 94305Cupertino,CA 95014tomasi@manduchi@AbstractProceedings of the 1998IEEE International Conference on Computer Vision,Bombay,IndiaBilateral filtering smooths images while preserving edges,by means of a nonlinear combination of nearby image values.The method is noniterative,local,and sim-ple.It combines gray levels or colors based on both their geometric closeness and their photometric similarity,and prefers near values to distant values in both domain and range.In contrast with filters that operate on the three bands of a color image separately,a bilateral filter can en-force the perceptual metric underlying the CIE-Lab color space,and smooth colors and preserve edges in a way that is tuned to human perception.Also,in contrast with standard filtering,bilateral filtering produces no phantom colors along edges in color images,and reduces phantom colors where they appear in the original image.1IntroductionFiltering is perhaps the most fundamental operation of image processing and computer vision.In the broadest sense of the term “filtering,”the value of the filtered image at a given location is a function of the values of the in-put image in a small neighborhood of the same location.In particular,Gaussian low-pass filtering computes a weighted average of pixel values in the neighborhood,in which,the weights decrease with distance from the neighborhood cen-ter.Although formal and quantitative explanations of this weight fall-off can be given [11],the intuitionis that images typically vary slowly over space,so near pixels are likely to have similar values,and it is therefore appropriate to average them together.The noise values that corrupt these nearby pixels are mutually less correlated than the signal values,so noise is averaged away while signal is preserved.The assumption of slow spatial variations fails at edges,which are consequently blurred by low-pass filtering.Many efforts have been devoted to reducing this undesired effect [1,2,3,4,5,6,7,8,9,10,12,13,14,15,17].How canSupported by NSF grant IRI-9506064and DoD grants DAAH04-94-G-0284and DAAH04-96-1-0007,and by a gift from the Charles Lee Powell foundation.we prevent averaging across edges,while still averaging within smooth regions?Anisotropic diffusion [12,14]is a popular answer:local image variation is measured at every point,and pixel values are averaged from neighborhoods whose size and shape depend on local variation.Diffusion methods average over extended regions by solving partial differential equations,and are therefore inherently iterative.Iteration may raise issues of stability and,depending on the computational architecture,efficiency.Other approaches are reviewed in section 6.In this paper,we propose a noniterative scheme for edge preserving smoothing that is noniterative and simple.Al-though we claims no correlation with neurophysiological observations,we point out that our scheme could be imple-mented by a single layer of neuron-like devices that perform their operation once per image.Furthermore,our scheme allows explicit enforcement of any desired notion of photometric distance.This is particularly important for filtering color images.If the three bands of color images are filtered separately from one another,colors are corrupted close to image edges.In fact,different bands have different levels of contrast,and they are smoothed differently.Separate 
smoothing perturbs the balance of colors,and unexpected color combinations appear.Bilateral filters,on the other hand,can operate on the three bands at once,and can be told explicitly,so to speak,which colors are similar and which are not.Only perceptually similar colors are then averaged together,and the artifacts mentioned above disappear.The idea underlying bilateral filtering is to do in the range of an image what traditional filters do in its domain.Two pixels can be close to one another,that is,occupy nearby spatial location,or they can be similar to one an-other,that is,have nearby values,possibly in a perceptually meaningful fashion.Closeness refers to vicinity in the do-main,similarity to vicinity in the range.Traditional filter-ing is domain filtering,and enforces closeness by weighing pixel values with coefficients that fall off with distance.Similarly,we define range filtering,which averages imagevalues with weights that decay with dissimilarity.Range filters are nonlinear because their weights depend on image intensity or putationally,they are no more com-plex than standard nonseparablefilters.Most importantly, they preserve edges,as we show in section4.Spatial locality is still an essential notion.In fact,we show that rangefiltering by itself merely distortsan image’s color map.We then combine range and domainfiltering, and show that the combination is much more interesting. We denote teh combinedfiltering as bilateralfiltering.Since bilateralfilters assume an explicit notion of dis-tance in the domain and in the range of the image function, they can be applied to any function for which these two distances can be defined.In particular,bilateralfilters can be applied to color images just as easily as they are applied to black-and-white ones.The CIE-Lab color space[16] endows the space of colors with a perceptually meaningful measure of color similarity,in which short Euclidean dis-tances correlate strongly with human color discrimination performance[16].Thus,if we use this metric in our bilat-eralfilter,images are smoothed and edges are preserved in a way that is tuned to human performance.Only perceptually similar colors are averaged together,and only perceptually visible edges are preserved.In the following section,we formalize the notion of bilateralfiltering.Section3analyzes rangefiltering in isolation.Sections4and5show experiments for black-and-white and color images,respectively.Relations with previous work are discussed in section6,and ideas for further exploration are summarized in section7.2The IdeaA low-pass domainfilter applied to image f x produces an output image defined as follows:h x x f x(1)where x measures the geometric closeness between the neighborhood center x and a nearby point.The bold font for f and h emphasizes the fact that both input and output images may be multiband.If low-passfiltering is to preserve the dc component of low-pass signals we obtainx x(2)If thefilter is shift-invariant,x is only a function of the vector difference x,and is constant.Rangefiltering is similarly defined:h x x f f f x(3)except that now f f x measures the photometric sim-ilarity between the pixel at the neighborhood center x and that of a nearby point.Thus,the similarity function operates in the range of the image function f,while the closeness function operates in the domain of f.The nor-malization constant(2)is replaced byx f f x(4) Contrary to what occurs with the closeness function,the normalization for the similarity function depends on the image f.We say that the 
similarity function is unbiased if it depends only on the difference f f x.The spatial distributionof image intensities plays no role in rangefiltering taken by bining intensities from the entire image,however,makes little sense,since image values far away from x ought not to affect thefinal value at x.In addition,section3shows that rangefiltering by itself merely changes the color map of an image,and is therefore of little use.The appropriate solution is to combine domain and rangefiltering,thereby enforcing both geometric and photometric binedfiltering can be described as follows:h x x f x f f x(5) with the normalizationx x f f x(6) Combined domain and rangefiltering will be denoted as bilateralfiltering.It replaces the pixel value at x with an average of similar and nearby pixel values.In smooth regions,pixel values in a small neighborhood are similar to each other,and the normalized similarity function isclose to one.As a consequence,the bilateralfilter acts es-sentially as a standard domainfilter,and averages away the small,weakly correlated differences between pixel values caused by noise.Consider now a sharp boundary between a dark and a bright region,as infigure1(a).When the bilateralfilter is centered,say,on a pixel on the bright side of the boundary,the similarity function assumes values close to one for pixels on the same side,and close to zero for pixels on the dark side.The similarity function is shown in figure1(b)for afilter support centered two pixels to the right of the step infigure1(a).The normalization term x ensures that the weights for all the pixels add up to one.As a result,thefilter replaces the bright pixel at the center by an average of the bright pixels in its vicinity,and essentially ignores the dark pixels.Conversely,when the filter is centered on a dark pixel,the bright pixels are ig-nored instead.Thus,as shown infigure1(c),goodfiltering behavior is achieved at the boundaries,thanks to the do-main component of thefilter,and crisp edges are preserved at the same time,thanks to the range component.Figure1:(a)A100-gray-level step perturbed by Gaussian noise with gray levels.(b)Combined similarity weights x f f x for a neighborhood centered two pixels to the right of the step in(a).The range component effectively suppresses the pixels on the dark side.(c)The step in(a)after bilateralfiltering with gray levels and pixels.2.1Example:the Gaussian CaseA simple and important case of bilateralfiltering is shift-invariant Gaussianfiltering,in which both the close-ness function x and the similarity function f are Gaussian functions of the Euclidean distance between their arguments.More specifically,is radially symmetricxwherex x xis the Euclidean distance between and x.The similarity function is perfectly analogous to:xwheref f fis a suitable measure of distance between the two intensity values and f.In the scalar case,this may be simply the absolute difference of the pixel difference or,since noise increases with image intensity,an intensity-dependent ver-sion of it.A particularly interesting example for the vector case is given in section5.The geometric spread in the domain is chosen based on the desired amount of low-passfiltering.A large blurs more,that is,it combines values from more distant image locations.Also,if an image is scaled up or down, must be adjusted accordingly in order to obtain equivalent results.Similarly,the photometric spread in the image range is set to achieve the desired amount of combination of pixel values.Loosely speaking,pixels with values 
much closer to each other than are mixed together and values much more distant than are not.If the image is amplified or attenuated,must be adjusted accordingly in order to leave the results unchanged.Just as this form of domainfiltering is shift-invariant, the Gaussian rangefilter introduced above is insensitive to overall additive changes of image intensity,and is therefore unbiased:iffiltering f x produces h x,then the samefilter applied to f x a yields h x a,since f a f xa f a f x a f f x.Of course,the rangefilter is shift-invariant as well,as can be easily verified from expressions(3)and(4).3Range Versus Bilateral FilteringIn the previous section we combined rangefiltering with domainfiltering to produce bilateralfilters.We now show that this combination is essential.For notational simplicity,we limit our discussion to black-and-white images,but analogous results apply to multiband images as well.The main point of this section is that rangefiltering by itself merely modifies the gray map of the image it is applied to. This is a direct consequence of the fact that a rangefilterhas no notion of space.Let be the frequency distribution of gray levels in the input image.In the discrete case,is the gray level histogram:is typically an integer between and,and is the fraction of image pixels that have a gray value of.In the continuous case,is the fraction ofimage area whose gray values are between and. For notational consistency,we continue our discussion in the continuous case,as in the previous section.Simple manipulation,omitted for lack of space,shows that expressions(3)and(4)for the rangefilter can be com-bined into the following:(7) whereindependently of the position x.Equation(7)shows range filtering to be a simple transformation of gray levels.The mapping kernel is a density function,in the sense that it is nonnegative and has unit integral.It is equal to the histogram weighted by the similarity function centered at and normalized to unit area.Since isformally a density function,equation(7)represents a mean. We can therefore conclude with the following result: Rangefiltering merely transforms the gray mapof the input image.The transformed gray value isequal to the mean of the input’s histogram valuesaround the input gray level,weighted by therange similarity function centered at.It is useful to analyze the nature of this gray map trans-formation in view of our discussion of bilateralfiltering. Specifically,we want to show thatRangefiltering compresses unimodal histograms.In fact,suppose that the histogram of the input image is a single-mode curve as infigure2(a),and consider an input value of located on either side of this bell curve. Since the symmetric similarity function is centered at, on the risingflank of the histogram,the product produces a skewed density.On the left side of the bell is skewed to the right,and vice versa.Since the transformed value is the mean of this skewed density,we haveon the left side and on the right side.Thus,the flanks of the histogram are compressed together.Atfirst,the result that rangefiltering is a simple remap-ping of the gray map seems to make rangefiltering rather useless.Things are very different,however,when rangefil-tering is combined with domainfiltering to yield bilateral filtering,as shown in equations(5)and(6).In fact,consider first a domain closeness function that is constant within a window centered at x,and is zero elsewhere.Then,the bilateralfilter is simply a rangefilter applied to the window. 
Thefiltered image is still the result of a local remapping of the gray map,but a very interesting one,because the remapping is different at different points in the image.For instance,the solid curve infigure2(b)shows the histogram of the step image offigure1(a).This histogram is bimodal,and its two lobes are sufficiently separate to allow us to apply the compression result above to each lobe.The dashed line infigure2(b)shows the effect of bilateralfiltering on the histogram.The compression effect is obvious,and corresponds to the separate smoothing of the light and dark sides,shown infigure1(c).Similar considerations apply when the closeness function has a profile other than constant,as for instance the Gaussian profile shown in section2,which emphasizes points that are closer to the center of the window.4Experiments with Black-and-White Im-agesIn this section we analyze the performance of bilateral filters on black-and-white images.Figure5(a)and5(b)in the color plates show the potential of bilateralfiltering for the removal of texture.Some amount of gray-level quan-tization can be seen infigure5(b),but this is caused by the printing process,not by thefilter.The picture“sim-plification”illustrated byfigure5(b)can be useful for data reduction without loss of overall shape features in ap-plications such as image transmission,picture editing andmanipulation,image description for retrieval.Notice that the kitten’s whiskers,much thinner than thefilter’s win-dow,remain crisp afterfiltering.The intensity values of dark pixels are averaged together from both sides of the whisker,while the bright pixels from the whisker itself areignored because of the range component of thefilter.Con-versely,when thefilter is centered somewhere on a whisker, only whisker pixel values are averaged together.Figure3shows the effect of different values of the pa-rameters and on the resulting image.Rows corre-spond to different amounts of domainfiltering,columns to different amounts of rangefiltering.When the value of the rangefiltering constant is large(100or300)with respect to the overall range of values in the image(1through254), the range component of thefilter has little effect for small:all pixel values in any given neighborhood have about the same weight from rangefiltering,and the domainfilter acts as a standard Gaussianfilter.This effect can be seen in the last two columns offigure(3).For smaller values of the rangefilter parameter(10or30),rangefilteringdominates perceptually because it preserves edges. However,for,image detail that was removed by smaller values of reappears.This apparently paradoxical effect can be noticed in the last row offigure3,and inparticularly dramatic form for,.This image is crisper than that above it,although somewhat hazy. This is a consequence of the gray map transformation and histogram compression results discussed in section3.In fact,is a very broad Gaussian,and the bilateral filter becomes essentially a rangefilter.Since intensity values are simply remapped by a rangefilter,no loss of detail occurs.Furthermore,since a rangefilter compresses the image histogram,the output image appears to be hazy. Figure2(c)shows the histograms for the input image and for the two output images for,,and for ,.The compression effect is obvious. 
Bilateralfiltering with parameters pixels and intensity values is applied to the image infigure4 (a)to yield the image infigure4(b).Notice that most of thefine texture has beenfiltered away,and yet all contours are as crisp as in the original image.Figure4(c)shows a detail offigure4(a),andfigure 4(d)shows the correspondingfiltered version.The two onions have assumed a graphics-like appearance,and the fine texture has gone.However,the overall shading is preserved,because it is well within the band of the domainFigure2:(a)A unimodal image histogram(solid),and the Gaussian similarity function(dashed).Their normalized product(dotted)is skewed to the right.(b)Histogram(solid)of image intensities for the step infigure1(a)and(dashed)for thefiltered image infigure1(c).(c)Histogram of image intensities for the image infigure5(a)(solid)and for the output images with,(dashed)and with,(dotted)from figure3.filter and is almost unaffected by the rangefilter.Also,the boundaries of the onions are preserved.In terms of computational cost,the bilateralfilter is twice as expensive as a nonseparable domainfilter of the same size.The range component depends nonlinearly on the image,and is nonseparable.A simple trick that decreases computation cost considerably is to precompute all values for the similarity function.In the Gaussian case,if the image has levels,there are possible values for ,one for each possible value of the difference.5Experiments with Color ImagesFor black-and-white images,intensities between any two grey levels are still grey levels.As a consequence, when smoothing black-and-white images with a standard low-passfilter,intermediate levels of gray are produced across edges,thereby producing blurred images.With color images,an additional complication arises from the fact that between any two colors there are other,often rather dif-ferent colors.For instance,between blue and red there are various shades of pink and purple.Thus,disturbing color bands may be produced when smoothing across color edges.The smoothed image does not just look blurred, it also exhibits odd-looking,colored auras around objects. Figure6(a)in the color plates shows a detail from a picture with a red jacket against a blue sky.Even in this unblurred picture,a thin pink-purple line is visible,and is caused by a combination of lens blurring and pixel averaging.In fact,pixels along the boundary,when projected back into the scene,intersect both red jacket and blue sky,and the resulting color is the pink average of red and blue.When smoothing,this effect is emphasized,as the broad,blurred pink-purple area infigure6(b)shows.To address this difficulty,edge-preserving smoothing could be applied to the red,green,and blue components of the image separately.However,the intensity profiles across the edge in the three color bands are in general different. 
Separate smoothing results in an even more pronounced pink-purple band than in the original,as shown infigure6 (c).The pink-purple band,however,is not widened as it is in the standard-blurred version offigure6(b).A much better result can be obtained with bilateralfil-tering.In fact,a bilateralfilter allows combining the three color bands appropriately,and measuring photometric dis-tances between pixels in the combined space.Moreover, this combined distance can be made to correspond closely to perceived dissimilarity by using Euclidean distance in the CIE-Lab color space[16].This space is based on a large body of psychophysical data concerning color-matching experiments performed by human observers.In this space, small Euclidean distances correlate strongly with the per-ception of color discrepancy as experienced by an“average”color-normal human observer.Thus,in a sense,bilateral filtering performed in the CIE-Lab color space is the most natural type offiltering for color images:only perceptually similar colors are averaged together,and only perceptu-ally important edges are preserved.Figure6(d)shows the image resulting from bilateral smoothing of the image in figure6(a).The pink band has shrunk considerably,and no extraneous colors appear.Figure7(c)in the color plates shows the result offive iterations of bilateralfiltering of the image infigure7(a). While a single iteration produces a much cleaner image (figure7(b))than the original,and is probably sufficient for most image processing needs,multiple iterations have the effect offlattening the colors in an image considerably, but without blurring edges.The resulting image has a much smaller color map,and the effects of bilateralfiltering are easier to see when displayed on a printed page.Notice the cartoon-like appearance offigure7(c).All shadows and edges are preserved,but most of the shading is gone,and no“new”colors are introduced byfiltering.6Relations with Previous WorkThe literature on edge-preservingfiltering is vast,and we make no attempt to summarize it.An early survey canFigure3:A detail fromfigure5(a)processed with bilateralfilters with various range and domain parameter values.(a)(b)(c)(d)Figure4:A picture before(a)and after(b)bilateralfiltering.(c,d)are details from(a,b).be found in[8],quantitative comparisons in[2],and more recent results in[1].In the latter paper,the notion that neighboring pixels should be averaged only when they are similar enough to the central pixels is incorporated into the definition of the so-called“G-neighbors.”Thus,G-neighbors are in a sense an extreme case of our method,in which a pixel is either counted or it is not.Neighbors in[1] are strictly adjacent pixels,so iteration is necessary.A common technique for preserving edges during smoothing is to compute the median in thefilter’s sup-port,rather than the mean.Examples of this approach are [6,9],and an important variation[3]that uses-means instead of medians to achieve greater robustness.More related to our approach are weighting schemes that essentially average values within a sliding window,but change the weights according to local differential[4,15] or statistical[10,7]measures.Of these,the most closely related article is[10],which contains the idea of multiply-ing a geometric and a photometric term in thefilter kernel. 
However,that paper uses rational functions of distance as weights,with a consequent slow decay rate.This forces application of thefilter to only the immediate neighbors of every pixel,and mandates multiple iterations of thefil-ter.In contrast,our bilateralfilter uses Gaussians as a way to enforce what Overton and Weimouth call“center pixel dominance.”A single iteration drastically“cleans”an image of noise and other smallfluctuations,and pre-serves edges even when a very wide Gaussian is used for the domain component.Multiple iterations are still useful in some circumstances,as illustrated infigure7(c),but only when a cartoon-like image is desired as the output.In addition,no metrics are proposed in[10](or in any of the other papers mentioned above)for color images,and no analysis is given of the interaction between the range and the domain components.Our discussions in sections3and 5address both these issues in substantial detail.7ConclusionsIn this paper we have introduced the concept of bilateral filtering for edge-preserving smoothing.The generality of bilateralfiltering is analogous to that of traditionalfilter-ing,which we called domainfiltering in this paper.The explicit enforcement of a photometric distance in the range component of a bilateralfilter makes it possible to process color images in a perceptually appropriate fashion.The parameters used for bilateralfiltering in our illus-trative examples were to some extent arbitrary.This is however a consequence of the generality of this technique. In fact,just as the parameters of domainfilters depend on image properties and on the intended result,so do those of bilateralfilters.Given a specific application,techniques for the automatic design offilter profiles and parameter values may be possible.Also,analogously to what happens for domainfiltering, similarity metrics different from Gaussian can be defined for bilateralfiltering as well.In addition,rangefilters can be combined with different types of domainfilters,including orientedfilters.Perhaps even a new scale space can be defined in which the rangefilter parameter corresponds to scale.In such a space,detail is lost for increasing,but edges are preserved at all range scales that are below the maximum image intensity value.Although bilateralfilters are harder to analyze than domainfilters,because of their nonlinear nature,we hope that other researchers willfind them as intriguing as they are to us,and will contribute to their understanding.References[1]T.Boult,R.A.Melter,F.Skorina,and I.Stojmenovic.G-neighbors.Proc.SPIE Conf.on Vision Geometry II,96–109,1993.[2]R.T.Chin and C.L.Yeh.Quantitative evaluation of some edge-preserving noise-smoothing techniques.CVGIP,23:67–91,1983.[3]L.S.Davis and A.Rosenfeld.Noise cleaning by iterated localaveraging.IEEE Trans.,SMC-8:705–710,1978.[4]R.E.Graham.Snow-removal—a noise-stripping process for picturesignals.IRE Trans.,IT-8:129–144,1961.[5]N.Himayat and S.A.Kassam.Approximate performance analysisof edge preservingfilters.IEEE Trans.,SP-41(9):2764–77,1993.[6]T.S.Huang,G.J.Yang,and G.Y.Tang.A fast two-dimensionalmedianfiltering algorithm.IEEE Trans.,ASSP-27(1):13–18,1979.[7]J.S.Lee.Digital image enhancement and noisefiltering by use oflocal statistics.IEEE Trans.,PAMI-2(2):165–168,1980.[8]M.Nagao and T.Matsuyama.Edge preserving smoothing.CGIP,9:394–407,1979.[9]P.M.Narendra.A separable medianfilter for image noise smoothing.IEEE Trans.,PAMI-3(1):20–29,1981.[10]K.J.Overton and T.E.Weymouth.A noise reducing preprocess-ing algorithm.In Proc.IEEE 
Computer Science Conf.on Pattern Recognition and Image Processing,498–507,1979.[11]Athanasios Papoulis.Probability,random variables,and stochasticprocesses.McGraw-Hill,New Y ork,1991.[12]P.Perona and J.Malik.Scale-space and edge detection usinganisotropic diffusion.IEEE Trans.,PAMI-12(7):629–639,1990. [13]G.Ramponi.A rational edge-preserving smoother.In Proc.Int’lConf.on Image Processing,1:151–154,1995.[14]G.Sapiro and D.L.Ringach.Anisotropic diffusion of color images.In Proc.SPIE,2657:471–382,1996.[15] D.C.C.Wang,A.H.V agnucci,and C.C.Li.A gradient inverseweighted smoothing scheme and the evaluation of its performance.CVGIP,15:167–181,1981.[16]G.Wyszecki and W.S.Styles.Color Science:Concepts and Meth-ods,Quantitative Data and Formulae.Wiley,New Y ork,NY,1982.[17]L.Yin,R.Yang,M.Gabbouj,and Y.Neuvo.Weighted medianfilters:a tutorial.IEEE Trans.,CAS-II-43(3):155–192,1996.。
An Improved Bilateral Filtering Algorithm Capable of Removing Salt-and-Pepper Noise
具有去除椒盐噪声能力的改进双边滤波算法周航;韩权【摘要】In allusion to the feature of bilateral filter having not the ability to remove salt-and-pepper noise,this paper proposes a adaptive template replacement algorithm,which firstly replaces the pixel value of bilateral filtering template areas and then bilateral filtering can be made for the replaced template areas to have the ability to remove the salt-and-pepper noise when equipped with the capacity of removing low-frequency noise while remaining edge with bilateral filtering.The experimental results show that improved bilateral filtering algorithm can remove salt-andpepper noise well,and compared with several classical algorithms having the ability to remove salt-and-pepper noise,the average value of PSNR calculated by the proposed algorithm has been improved more than 5.8% and the average value of SSIM has been enhanced more than 2.75% under the condition of 10%~90% of noise density.%针对双边滤波器不能有效去除椒盐噪声这一特点,本文提出了一种自适应大小的模板替换算法,通过先替换双边滤波模板区域像素值,再对替换后的模板区域做双边滤波,使其在具有双边滤波保边并去除低频噪声能力的基础上兼具去除椒盐噪声的能力.实验结果证明:本文提出的算法在双边滤波的基础上可以有效去除图像中的椒盐噪声,与几种经典去除椒盐噪声的算法相比,在噪声密度1o%~90%的情况下,峰值信噪比(PSNR)平均值提升5.8%以上,结构相似性(SSIM)平均值提升2.75%以上.【期刊名称】《北京交通大学学报》【年(卷),期】2017(041)005【总页数】9页(P43-51)【关键词】图像处理;双边滤波;模板替换;椒盐噪声【作者】周航;韩权【作者单位】北京交通大学电子信息工程学院,北京100044;北京交通大学电子信息工程学院,北京100044【正文语种】中文【中图分类】TP391在图像的采集、传输和处理过程中,图像受环境、器材及处理方法的影响,不可避免的会产生不同程度的噪声,其中高斯噪声和椒盐噪声是较为常见的图像噪声污染源.受噪声影响,往往会给后续的图像处理(如识别和分割等)带来较大的影响,因此有效去除图像中的各种噪声显得尤为必要.针对高斯噪声,高斯滤波是一种简单可行的噪声处理方法,但高斯滤波在去除高斯噪声的同时会使图像丢失边缘等细节信息,从而使图像变得模糊,降低图像的整体质量.文献[1]提出了著名的双边滤波(Bilateral filter)算法,在高斯滤波器的基础上考虑像素间灰度值的差别,在去噪的同时可以保持图像边缘的清晰.但由于双边滤波采用高斯核作为算法基础来对带有噪声的图像进行处理,因此并不具备去除椒盐噪声的能力.针对椒盐噪声,标准中值滤波(SMF)算法[2]是一种简单有效的噪声处理方法,但当椒盐噪声密度较高时,中值滤波的滤波效果很差,且中值滤波模板区域越大,图像边缘会变得越模糊.针对中值滤波的这些缺点,近年来有研究人员提出新的算法来获得更好地去噪效果,其中较为经典的算法有:文献[3]提出的自适应中值加权滤波(Adaptive Center Weighted Median Filter,ACWMF)算法.文献[4]提出的递进开关中值滤波(Progressive Switching Median Filter, PSMF)算法.文献[5]提出的基于决策(Decision-Based Algorithm,DBA)算法.文献[6]提出的模糊自适应中值(Noise Adaptive Fuzzy Switching Median Filter,NAFSMF)算法.文献[7-8]提出的改进的基于决策的不对称裁剪的中值滤波(Modified Decision-Based Unsymmetrical Trimmed Median Filter,MDBUTMF)算法.文献[9]提出的弹性中值滤波1(Elastic Median Filter1 ,EMF1)算法,弹性中值滤波2(Elastic Median Filter 2,EMF2)算法.但以上改进算法均在高密度椒盐噪声下性能快速下降,为此本文作者提出了一种先替换双边滤波器模板区域像素值,再用替换后的模板区域进行双边滤波的方法来改进双边滤波算法,从而在双边滤波基础上对高密度椒盐噪声进行有效滤波.用DBA、NAFSMF、MDBUTMF、EMF1、EMF2算法与本文改进算法的椒盐噪声去除能力进行了对比,采用了峰值信噪比(PSNR)和结构相似性(SSIM)作为图像质量客观评价标准.实验结果证明本文提出的算法在椒盐噪声的去除能力上优于5种算法.1.1 NAFSMF算法NAFSMF算法在椒盐噪声点将当前像素值和所在邻域的中值进行加权平均后再进行赋值,其邻域所用模板大小不固定,弥补了中值滤波算法限定模板大小的不足.算法设定两个阈值T1和T2,T1=10,T2=30,以3×3作为最小模板大小设置自适应模板检测模板中心点附近像素点的灰度值,直到检测到非椒盐噪声点为止,此时的模板大小记为w×w,w的最大值为7.设模板中心点像素灰度值为P(i,j),记M(i,j)=median(Pi-w,j-w,Pi,j-w,Pi+w,j-w,Pi-w,j),记D(i,j)为该点附近3×3的模板区域内其他像素灰度值与P(i,j)差值的最大值.根据D(i,j)的大小做如下判断,记F(i,j)为将模板中心像素点的灰度值P(i,j)替换为[1-F(i,j)]×P(i,j)+F(i,j)×M(i,j).NAFSMF算法极大地改善了FSMF算法的性能[10],在处理高密度椒盐噪声图像时性能优良,处理中低密度椒盐噪声图像时性能相对较差.实验表明,当椒盐噪声密度高于80%时NAFSMF算法性能快速下降但滤波效果仍然显著.1.2 MDBUTMF算法针对高密度椒盐噪声污染的图像,MDBUTMF在基于决策的非对称裁剪中值滤波器(DBUTMF)[11]的基础上增加了噪声检测机制.MDBUTMF算法固定使用3×3的模板窗口.如果模板中心像素点是椒盐噪声点,则对该点进行滤波.对椒盐噪声点进行滤波时,如果模板内所有点均为椒盐噪声点,则使用模板中所有点灰度值的均值来替换P(i,j);如果模板中有非椒盐噪声点存在,则移除椒盐噪声点后用剩余像素点灰度值的中值替换P(i,j).由于该算法采用固定大小模板滤波,利用均值代替中心像素,因此算法鲁棒性较低,对部分图像滤波无效.MDBUTMF算法在处理低密度椒盐噪声图像时性能优良,在处理中高密度椒盐噪声时性能快速下降.其主要原因在于当图像中椒盐噪声密度较高时,3×3的模板区域内所有点均为椒盐噪声点的概率变大,其灰度值的均值与中心点的真实灰度值相差较大,因此算法性能快速下降.2.1 双边滤波的算法改进双边滤波是一种以高斯核(Gaussian Kernel)为基础的非线性的滤波方法,可以同时考虑像素的空间距离和像素间灰度值的差别,从而达到保边去噪的目的.其基本公式为Gσs(‖p-q‖)Gσr(‖Ip-Iq‖)IqWp=Gσs(‖p-q‖)Gσr(‖Ip-Iq‖)式中:p、q分别为像素点的空间位置;Ip、Iq为像素的灰度值;S是以p为中点的模板区域; 
Gσs和Gσr分别代表空间距离和灰度值的高斯核函数;BF[I]p为替换后的模板中心点像素灰度值;Wp为权值系数.可以看出在双边滤波中采用高斯核直接对原始图像的像素空间距离和像素间灰度值进行处理,当模板区域中心点为椒盐噪声点时,Wp为零,因此双边滤波不具有去除椒盐噪声的能力.在此基础上,本文作者提出了先用自适应大小的模板替换方法来替换模板区域内椒盐噪声像素点,再用高斯核对替换后的模板区域内像素空间距离和像素间灰度值进行处理的方法来改进双边滤波算法.改进后的算法流程如图1所示,改进后的算法公式为‖‖‖‖)式中为替换后的模板区域;均为新模板中与Ip、Iq相对应像素的灰度值.改进后的算法对模板中心p点的灰度值进行了替换,因此可以有效避免高斯权值过小的问题.2.2 自适应大小的模板替换由于改进算法需要对模板区域像素进行替换如图2所示,在进行模板替换时,模板大小的选取将直接影响改进算法对椒盐噪声的滤波效果.如果选取的模板过小,则在高密度椒盐噪声下,模板区域内很有可能没有有效像素,替换后的像素值依然为无效像素,且当椒盐噪声密度大于70%时这一现象尤为明显,因此在高密度椒盐噪声的情况下去噪效果很差.如果选取的模板过大,则在低密度椒盐噪声下滤波时,距离中心像素点较远的像素值,会直接影响替换像素的灰度值大小,即距离中心像素点越远,则该点像素值与中心像素点的真实灰度值差异可能会越大,因此选取的模板过大时对低密度椒盐噪声的图片滤波效果较差.采用自适应大小的模板替换方法则可以有效避免这一问题,以固定模板大小3×3为例,对椒盐噪声密度分别为70%、80%、90%的Lena图进行模板替换操作,效果如图2所示,可以看到采用自适应模板大小的模板替换方案明显更好,且椒盐噪声密度越大差异越明显.以固定模板大小7×7为例,对椒盐噪声密度分别为10%、20%、30%的Lena图进行模板替换操作,效果如图3所示,可以看到采用自适应模板大小的模板替换方案明显更好,且椒盐噪声密度越小差异越明显.本文采用自适应大小的模板替换算法,该算法包括以下3个过程.1)检测模板中心点是否为椒盐噪声点,如果是则进行模板替换;如果不是,则不改变该点的灰度值.设待处理像素点的灰度值为P(i,j),判定公式为2)自适应调整模板大小.检测模板内是否有非椒盐噪声点,如果有,模板大小设定为3×3;如果没有,则扩大模板大小继续检测,模板最大值设定为7×7.3)对模板区域进行裁剪,删除模板内的噪声点.对剩余像素分类后替换中心像素灰度值.2.3 模板中心点的替换规则以3×3的模板大小为例对中心点的替换规则,共分为以下两种情况.1)模板区域内灰度值小于76(0.3倍255)或大于178(0.7倍255)的像素点在所有有效像素点中个数最多.灰度值小于76的像素点最多时,对灰度值小于76的像素点集合取中值,用此中值来替换模板中心点的灰度值,替换方式如图4所示,图中C1、C2、C3分别表示模板区域内灰度值小于76、灰度值大于178、灰度值介于76至178的像素点个数;当模板区域内灰度值大于178的像素点在所有有效像素点中个数最多时替换方式与之类似,对灰度值大于178的像素点集合取中值,用此中值来替换模板中心点的灰度值.2)模板区域内灰度值介于76~178之间的像素点在所有有效像素点中个数最多,或灰度值小于76、大于178、介于76~178之间的3组像素点集合内某两组的像素点个数相等且大于另一组.此时直接取灰度值介于76~178之间的像素点集合的中值,用此中值来替换模板中心点的灰度值.此外,如果模板大小扩大到7×7时区域内所有像素点仍均为椒盐噪声点,则采用模板中心点前一点的替换值来替换模板中心点,替换方法如图5所示,其中NA 表示不适用.按以上步骤遍历图像中所有椒盐噪声点后,再用双边滤波器的模板从图像右上角遍历图像中所有像素点,此时双边滤波器选中的模板区域即为模板替换后新的模板区域.实验在Matlab R2015b平台下完成,选用的所有图片大小均为512×512,tiff格式,对8张不同的图片分别添加了10%~90%的椒盐噪声并进行滤波.3.1 改进算法实验结果分析传统的双边滤波由于采用高斯核为基础,由式(2)不难看出双边滤波算法既类似于传统的高斯滤波采用局部加权平均的思想,又考虑像素点的空间邻近度关系和像素灰度值的相似性,通过两者的结合,能很好地达到保持图像边缘又有效平滑高斯噪声的目的[12].但当模板区域S的中心点p为椒盐噪声点时,S内其他点的灰度值Iq与p点灰度值Ip差异很大,从而导致高斯核Gσr(‖Ip-Iq‖)极小并趋于0,因此传统的双边滤波算法不具备去除椒盐噪声的能力.而改进算法可以有效避免高斯核趋于0的情况,因此在理论上改进算法应在具有传统双边滤波算法保边和去除高斯噪声能力的同时,可以对椒盐噪声进行有效滤波.为了验证这一性能,在图6中给出了3种算法去噪后的图像对比,对Lena图添加了均值为0,方差为0.05的高斯噪声,用改进算法对加噪后的图像进行滤波,并与双边滤波和高斯滤波进行对比,表1中给出了3种算法对受不同方差大小高斯噪声污染的Lena图去噪后PSNR和SSIM的大小.3种算法模板大小均为7×7的矩阵,双边滤波的空间参数和颜色参数分别为10、0.2,改进算法的颜色参数为由图6可以看出相比于高斯滤波,双边滤波和本文改进算法均有较好的保边能力,且保边效果一致.由表1可以看出改进算法相较于双边滤波去除高斯噪声的能力几乎相同,因此认为改进算法保留了双边滤波保边和去除高斯噪声的能力.为了验证改进算法在双边滤波保边和去除高斯噪声的能力之外还具有去除椒盐噪声的能力,对Lena添加均值为0,方差为0.05的高斯噪声,噪声密度20%的椒盐噪声,用改进算法和双边滤波分别对加噪后的图像进行滤波并进行对比.滤波结果如图7所示.由图7显然可以看出双边滤波对同时含有高斯噪声和椒盐噪声的图像滤波效果很差,而本文改进算法的滤波效果明显好于双边滤波.因此认为改进算法保留了双边滤波保边和去除高斯噪声能力的同时具有去除椒盐噪声的能力.3.2 改进算法与经典算法对比为了验证改进算法对椒盐噪声的去噪能力,实验选用8张大小为512×512的常用图像处理图片,见图8和图9.对其添加10%~90%密度的椒盐噪声,并用本文改进算法和DBA、NAFSMF、MDBUTMF、EMF1、EMF2算法对其进行滤波,给出了滤波后的PSNR和SSIM.图8和9分别给出了几种算法在噪声密度为30%、50%、70%的情况下对Lena图和Peppers图的处理效果,并给出了相应的结构相似性局部值的分布图.1)由图8和9可以看出几种算法在滤波的同时均不会对图像边缘造成过度模糊,即拥有较好的保边性能.在结构相似性局部值的分布图中,像素点的颜色越深代表滤波后的图像在这一点的灰度值与原图像差别越大,因此可以看出在30%、50%、70%的3种噪声密度下改进算法的滤波效果均好于DBA、MDBUTMF、EMF1、EMF2算法,且噪声密度越高,该算法的优势越明显,相比于NAFSMF算法,改进算法性能好于NAFSMF算法,但随着噪声密度的提高,两种算法间的差距逐渐2)在图10、11中分别给出了不同噪声密度下几种算法的PSNR和SSIM的计算值的折线图,与其他5种算法相比,本文提出的改进算法滤波效果更好,且PSNR平均值提升5.8%以上,SSIM平均值提升2.75%以上.因为使用了自适应模板大小,本文的改进算法在高椒盐噪声密度下的滤波效果较好.图12、13给出了当椒盐噪声密度为95%时这几种算法的处理结果,在图12、13中前2张图片分别为Lena 
和Peppers图原图和加噪后的图像,后6张依次为DBA、NAFSMF、MDBUTMF、EMF1、EMF2、改进算法的处理结果.综合以上的PSNR和SSIM数据,可以看出在不同椒盐噪声密度下本文提出的改进算法滤波效果均远好于DBA、EMF1、EMF2算法.与MDBUTMF算法相比,改进算法在低椒盐噪声密度时滤波效果与其相近,在中高椒盐噪声密度时MDBUTMF算法性能急剧下降,改进算法滤波效果远好于MDBUTMF算法.与NAFSMF算法相比,改进算法在中低椒盐噪声密度时滤波效果远好于NAFSMF算法,在高椒盐噪声密度时NAFSMF算法性能下降较慢,但改进算法滤波效果仍好于NAFSMF算法.因此认为本文提出的改进算法综合性能好于以上5种算法,这一结果与主观视觉感受相吻合.1)传统的双边滤波具有良好的保边和去除高斯噪声的能力,但并不具有去除椒盐噪声的能力.因此本文作者提出了一种先对模板区域像素灰度值进行替换,再进行双边滤波的方法来改进传统双边滤波算法,使其具有良好的去除椒盐噪声的能力,拓宽了双边滤波的应用范围.2)在Matlab平台下的实验中,验证了本文作者提出的改进算法的滤波性能,实验共对8张图像处理领域常用的图片进行了滤波处理.实验结果表明,在噪声密度10%~90%的情况下与经典算法DBA、NAFSMF、MDBUTMF、EMF1、EMF2相比,本文提出的改进算法滤波效果更好,PSNR平均值提升5.8%以上,SSIM平均值提升2.75%以上,且对受到高密度椒盐噪声污染的图像具有更好的滤波效果.【相关文献】[1] TOMASI C, MANDUCHI R. Bilateral filtering for gray and color images[C]//International Conference on Computer Vision, 1998:839.[2] PITAS I, VENETSANOPOULOS A N. Order statistics in digital image processing[J]. Proceedings of the IEEE, 1992, 80(12):1893-1921.[3] KO S J,LEE Y H.Center weighted median filters and their applications to image enhancement[J].IEEE Transactions on Circuits and Systems,1991,38(9):984-993.[4] WANG Z, ZHANG D. Progressive switching median filter for the removal of impulse noise from highly corrupted images[J]. IEEE Transactions on Circuits & Systems II Analog & Digital Signal Processing, 1999, 46(1):78-80.[5] SRINIVASAN K S, EBENEZER D. A new fast and efficient decision-based algorithm for removal of high-density impulse noises[J]. IEEE Signal Processing Letters, 2007, 14(3):189-192.[6] TOH K K V, ISA N A M. Noise adaptive fuzzy switching median filter for salt-and-pepper noise reduction[J]. IEEE Signal Processing Letters, 2010, 17(3):281-284.[7] ESAKKIRAJAN S,VEERAKUMAR T,SUBRAMANYAM A N,et al.Removal of high density salt and pepper noise through modified decision based unsymmetric trimmed median filter[J].IEEE Signal Processing Letters,2011,18(5):287-290.[8] JOURABLOO A, FEGHAHATI A H, JAMZAD M. New algorithms for recovering highly corrupted images with impulse noise[J]. Scientia Iranica, 2012, 19(6):1738-1745.[9] ERKAN U, KILICMANB A. Two new methods for removing salt-and-pepper noise from digital images[J]. ScienceAsia, 2016, 42(1): 28-32.[10] 曾宪佑, 黄佐华. 一种新型的自适应模糊中值滤波算法[J]. 计算机工程与应用, 2014,50(17):134-136.ZENG Xianyou, HUANG Zuohua. New adaptive fuzzy median filtering algorithm[J]. Computer Engineering and Applications, 2014, 50(17):134-136. (in Chinese)[11] SRINIVASAN K S, EBENEZER D. A new fast and efficient decision-based algorithm for removal of high-density impulse noises[J]. IEEE Signal Processing Letters, 2007, 14(3):189-192.[12] 张海荣. 双边滤波去噪方法及其应用研究[D]. 合肥:合肥工业大学, 2014.ZHANG Hairong.Research on bilateral filtering denoising methods and their applications[D].Hefei:Hefei University of Technology, 2014.(in Chinese)。
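The section above describes detecting salt-and-pepper pixels, replacing them from an adaptively sized window of non-noise neighbors, and then applying bilateral filtering. The sketch below illustrates that pipeline in simplified form; the noise test (pure 0/255 values), the window growth rule, and the use of OpenCV's bilateralFilter in place of the paper's own bilateral step are assumptions made for brevity, not the published algorithm.

# Sketch: replace salt-and-pepper pixels from an adaptive window, then bilateral-filter.
import numpy as np
import cv2

def replace_salt_pepper(img, max_win=7):
    out = img.copy()
    noisy = (img == 0) | (img == 255)
    for y, x in zip(*np.nonzero(noisy)):
        for r in range(1, max_win // 2 + 1):          # grow the window until valid neighbors appear
            patch = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            valid = patch[(patch != 0) & (patch != 255)]
            if valid.size:
                out[y, x] = np.median(valid)
                break
        else:
            out[y, x] = out[y, x - 1] if x > 0 else out[y, x]   # fall back to the previous pixel
    return out

def improved_bilateral(img_uint8):
    cleaned = replace_salt_pepper(img_uint8)
    return cv2.bilateralFilter(cleaned, d=7, sigmaColor=25, sigmaSpace=10)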
Bayesian Hyperparameter Optimization for Multilayer Perceptrons
Bayesian hyperparameter optimization is a technique for automatically tuning the hyperparameters of machine learning models.
It uses Bayesian probability theory to estimate the best hyperparameter values in order to optimize model performance.
The multilayer perceptron (MLP) is a commonly used neural network model consisting of multiple hidden layers, each containing several neurons.
MLPs can be used for classification, regression, and many other tasks.
When Bayesian hyperparameter optimization is used to tune an MLP, common hyperparameters such as the learning rate, batch size, and number of training iterations are typically chosen.
The Bayesian optimizer selects the next candidate values based on how previous settings of these hyperparameters performed.
It improves efficiency by evaluating only a small number of hyperparameter combinations at each step rather than searching every possible combination.
In practice, Bayesian hyperparameter optimization usually relies on a method called Gaussian process regression, which estimates the likely values of each hyperparameter together with their probability distributions.
Based on this information, the next hyperparameter values are chosen so as to maximize the expected improvement in model performance.
Bayesian hyperparameter optimization tunes hyperparameters automatically, avoiding the difficulty and tedium of manual tuning.
It can also help find better hyperparameter combinations, improving model performance and accuracy.
This matters for machine learning experimentation and development, because it helps find the best model configuration quickly.
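A minimal sketch of the workflow described above, assuming the scikit-optimize and scikit-learn packages are available; the search-space bounds, the dataset, and the number of evaluations are arbitrary illustrative choices.

# Sketch: Gaussian-process Bayesian optimization of MLP hyperparameters.
from skopt import gp_minimize
from skopt.space import Real, Integer
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)
space = [Real(1e-4, 1e-1, prior="log-uniform", name="learning_rate_init"),
         Integer(32, 256, name="batch_size"),
         Integer(50, 300, name="max_iter")]

def objective(params):
    lr, batch, iters = params
    model = MLPClassifier(hidden_layer_sizes=(64,), learning_rate_init=lr,
                          batch_size=int(batch), max_iter=int(iters), random_state=0)
    # gp_minimize minimizes, so return the negative cross-validated accuracy.
    return -cross_val_score(model, X, y, cv=3).mean()

result = gp_minimize(objective, space, n_calls=25, random_state=0)
print("best hyperparameters:", result.x, "best CV accuracy:", -result.fun)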
Research on an Uncalibrated Hand-Eye Coordination Method for Robots
Received: 2019-07-02. Supported by the National Key Technology R&D Program of China (2015BAK16B04). About the authors: SHENG Rong (1994-), male, M.S. candidate, School of Mechanical Engineering, University of Shanghai for Science and Technology, research interest: robot visual control; SUN Shouqun (1964-), male, Ph.D., associate professor and M.S. supervisor, School of Mechanical Engineering, University of Shanghai for Science and Technology, research interests: thermal analysis, thermal design, and electromechanical-thermal coupling of electromechanical equipment, rotor-bearing system dynamics, and modern design (virtual prototyping, system simulation, online collaboration).
0 Introduction. Uncalibrated hand-eye coordination is a nonlinear control problem with unmodeled dynamics; describing the robot's hand-eye mapping with an image Jacobian model is currently the most widely used class of methods.
During robot motion, because the hand-eye relationship is nonlinear, the image Jacobian changes markedly, and whether an effective online estimate of the image Jacobian can be obtained directly determines how well the robot can be controlled.
QIAN Jiang et al. [1] proposed an online Jacobian identification strategy based on Kalman filtering, turning online estimation of the Jacobian into a state observation problem for the corresponding system.
A drawback of that algorithm is that model parameters such as the process noise covariance matrix Q and the measurement noise covariance matrix R must be known in advance; XIN Jing et al. [2] tried to estimate the system noise in real time, applying Sage-Husa adaptive Kalman filtering to online identification of the image Jacobian and achieving visual positioning of the robot when the noise statistics are not fully known.
Simulation experiments show, however, that simultaneously estimating Q and R with this model can cause
SHENG Rong (1), SUN Shouqun (1), JIAO Yuge (1), YE Qihan (2) (1. School of Mechanical Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China; 2. Shanghai Baosight Software Co., Ltd., Shanghai 201900, China). Abstract: Visual positioning without calibration is a hot and difficult topic in robot visual servoing.
This paper proposes an uncalibrated robot hand-eye coordination method based on fast adaptive filtering.
First, a differential mapping between the motion space and the image feature space is established from the robot's three-dimensional base coordinates; without pre-calibrating the vision sensor or the robot parameters, an adaptive filter that directly estimates the gain matrix K_k is designed to identify the image Jacobian online.
The least-squares method is used to obtain the initial state estimate, on which basis the visual controller is designed and the motion control quantities are computed.
A simulation model of the robot hand-eye coordination control system is built in MATLAB; the method is used for positioning feedback when the statistical properties of the measurement noise are unknown, and the tracking of a spatial spiral motion of the robot is analyzed.
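To make the Jacobian-estimation idea concrete, the sketch below treats the flattened image Jacobian as the state of a standard linear Kalman filter, with the measured change in image features ds = J dq providing the observation; the noise covariances, dimensions, and random-walk prediction model are illustrative assumptions, not the adaptive gain design proposed in the paper.

# Sketch: online image-Jacobian estimation with a standard Kalman filter.
import numpy as np

class JacobianKF:
    def __init__(self, m, n, q=1e-4, r=1e-2):
        # State: the m*n entries of the image Jacobian J (feature dim m, joint dim n).
        self.m, self.n = m, n
        self.x = np.zeros(m * n)            # flattened (row-major) Jacobian estimate
        self.P = np.eye(m * n)
        self.Q = q * np.eye(m * n)
        self.R = r * np.eye(m)

    def update(self, dq, ds):
        # Measurement model: ds = J dq  =>  ds = H x, with H = kron(I_m, dq^T).
        H = np.kron(np.eye(self.m), dq.reshape(1, -1))
        self.P = self.P + self.Q                             # predict (random-walk model)
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)                  # Kalman gain
        self.x = self.x + K @ (ds - H @ self.x)
        self.P = (np.eye(self.m * self.n) - K @ H) @ self.P
        return self.x.reshape(self.m, self.n)                # current Jacobian estimate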
Edge-Preserving Filters: Bilateral Filtering and Guided Filtering
An edge-preserving filter is a special class of filter that effectively retains the edge information in an image during filtering.
The bilateral filter, the guided image filter, and the weighted least squares filter are among the better-known edge-preserving filters.
Bilateral filtering. The bilateral filter is well known and widely used. In short, it weights pixels by both their spatial distance and their intensity difference, which is what gives it its edge-preserving property.
Consider first the familiar Gaussian filter, where W is the weight, i and j are pixel indices, and K is a normalization constant.
As the formula shows, the weight depends only on the spatial distance between pixels, so the filtering effect is the same regardless of image content.
The bilateral filter simply adds one more term to the Gaussian weight: with I denoting pixel intensity, the weight shrinks wherever the intensity difference is large (at edges), so the smoothing effect weakens there.
Overall, in regions where intensity varies little, bilateral filtering behaves much like Gaussian filtering, while at edges and other places with large intensity gradients it preserves the gradient.
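A direct (unoptimized) implementation of the weighting just described, for a single-channel float image; the sigma values and window radius are illustrative.

# Sketch: brute-force bilateral filter for a grayscale float image.
import numpy as np

def bilateral(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))      # domain (closeness) weights
    pad = np.pad(img, radius, mode="edge")
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng = np.exp(-((patch - img[y, x])**2) / (2 * sigma_r**2))  # range (similarity) weights
            weights = spatial * rng
            out[y, x] = (weights * patch).sum() / weights.sum()
    return out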
(1) Median filtering: Fast Median and Bilateral Filtering; What is GIMP's equivalent of Photoshop's Median filter?
(2) Bilateral filtering: Bilateral Filtering for Gray and Color Images
(3) Guided filtering: Guided Image Filtering
(4) Bi-Exponential Edge-Preserving Smoother
(5) Selective blur
(6) References
A High Dynamic Range Image Enhancement Algorithm
XI Zhihong; ZHAO Lanfei. Abstract: Because display devices cannot reproduce the true appearance of high dynamic range images, an image enhancement algorithm based on hierarchical tone mapping is proposed. The algorithm first applies a Retinex model and a fast bilateral filtering algorithm to estimate the illumination image. It then uses a Gaussian mixture model and the expectation-maximization algorithm to split the probability distribution of the illumination image into a mixture of hierarchical Gaussian components. Next, it iterates to obtain the optimal gamma coefficient for each Gaussian probability layer. Finally, the layers are recombined to obtain the enhanced illumination image. Experiments verify that the algorithm compresses the image's dynamic range while producing a clear enhancement effect.
Research on a Denoising Algorithm Combining Bilateral Filtering and Wavelet Threshold Shrinkage
联合双边滤波器和小波阈值收缩去噪算法研究刘尚旺;郜刘阳;王博【摘要】针对现有去噪算法去噪不彻底、噪声误判、损害图像边缘和纹理细节信息的缺点,提出一种联合双边滤波器和小波阈值收缩图像去噪算法.首先,使用双边滤波器对含有噪声图像进行分层;其次,对不同分层结果,选择不同滤波器进行去噪:高对比度层采用双边滤波器,低对比度层采用小波阈值收缩去噪方法;最后,融合高、低对比度层去噪图像,实现有效去除噪声的同时,保证图像信息完整.实验结果表明,本文算法的峰值信噪比达到40.99 dB,比非局部均值滤波、双边滤波器、小波阈值收缩和偏微分方程图像去噪算法分别提高了7.79%,3.56%,11.22%和1.91%;与此同时,还能有效保留图像边缘和纹理等细节信息.%Aiming at overcoming the shortcomings of existing denoising algorithms, such as the poor denoising capability,the noise error evaluation,and the damaging of the image edge and texture details,this paper proposes an image denoising algorithm of joint bilateral filter and wavelet threshold shrinkage. Firstly, the original noise image is divided into high -contrast and low -contrast layers by bilateral filter. Secondly, different appropriate filters are employed for different hierarchical layers. i.e., the bilateral filter and wavelet threshold shrinkage are adopted for high-contrast and low-contrastlayers,respectively. Finally,the final denoising image is obtained by integrating high-contrast with low-contrast layers' denoising images, which suppresses noises and at the same time enhances the image more efficiently. Experimental results show that peak signal to noise ratio(PSNR) of this method reaches 40.99 dB, which is higher than the ratio of non -local means filter, bilateral filter, wavelet threshold shrinkage and partial differential equation algorithms by 7.79%, 3.56%, 11.22% and 1.91%, respectively. Moreover, the proposed algorithm can not only remove the noises efficiently but also preserve the image edge and texture details very well.【期刊名称】《国土资源遥感》【年(卷),期】2018(030)002【总页数】11页(P114-124)【关键词】双边滤波器;小波阈值收缩;峰值信噪比;图像边缘;纹理【作者】刘尚旺;郜刘阳;王博【作者单位】河南师范大学计算机与信息工程学院,新乡 453007;"智慧商务与物联网技术"河南省工程实验室,新乡 453007;河南师范大学计算机与信息工程学院,新乡453007;"智慧商务与物联网技术"河南省工程实验室,新乡 453007;河南师范大学计算机与信息工程学院,新乡 453007;"智慧商务与物联网技术"河南省工程实验室,新乡 453007【正文语种】中文【中图分类】TP790 引言图像在采集、传输和存储过程中不可避免会受到各种噪声影响,从而造成图像质量下降,影响其后续处理[1]。
A Survey of Fast Bilateral Filtering
快速双边滤波综述快滤双边滤波⼀、关于双边滤波(BLF)C.Tomasi 和R.Manduchi 提出了⼀种⾮迭代的简单策略,⽤于边缘保持滤波,称之为双边滤波。
双边滤波的内在想法是:在图像的值域(range )上做传统滤波器在空域(domain )上做的⼯作。
空域滤波对空间上邻近的点进⾏加权平均,加权系数随着距离的增加⽽减少;值域滤波则是对像素值相近的点进⾏加权平均,加权系数随着值差的增⼤⽽减少。
A low-pass domain filter is defined as follows, with f(x) the input image and h(x) the output image:

$$\mathbf{h}(\mathbf{x}) = k_d^{-1}(\mathbf{x}) \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} \mathbf{f}(\boldsymbol{\xi})\, c(\boldsymbol{\xi}, \mathbf{x})\, d\boldsymbol{\xi}$$

Here $c(\boldsymbol{\xi}, \mathbf{x})$ measures the geometric closeness between the neighborhood center x and a nearby point ξ, and $k_d$ is a normalization factor whose value does not depend on image content and is constant for a given geometric position.
Range filtering is defined analogously:

$$\mathbf{h}(\mathbf{x}) = k_r^{-1}(\mathbf{x}) \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} \mathbf{f}(\boldsymbol{\xi})\, s(\mathbf{f}(\boldsymbol{\xi}), \mathbf{f}(\mathbf{x}))\, d\boldsymbol{\xi}$$

where $s(\mathbf{f}(\boldsymbol{\xi}), \mathbf{f}(\mathbf{x}))$ measures the photometric similarity between the pixel at the neighborhood center x and that at a nearby point ξ, and $k_r$ is a normalization factor; unlike $k_d$, the value of $k_r$ depends on the image f.
Applying range filtering alone is meaningless: it merely changes the image's color map, because the values of points spatially far from the neighborhood center x should not influence the final value at x.
The appropriate approach is to combine domain filtering with range filtering, taking geometric position and photometric value into account at the same time.
The combined filter is the bilateral filter:

$$\mathbf{h}(\mathbf{x}) = k^{-1}(\mathbf{x}) \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} \mathbf{f}(\boldsymbol{\xi})\, c(\boldsymbol{\xi}, \mathbf{x})\, s(\mathbf{f}(\boldsymbol{\xi}), \mathbf{f}(\mathbf{x}))\, d\boldsymbol{\xi}$$

This combined filter is called the bilateral filter.
It replaces the pixel value at x with an average of the values of pixels that are both spatially close to x and photometrically similar to it.
In smooth regions, the bilateral filter behaves as a standard domain filter, averaging away noise.
As shown in Figure 2(a), a sharp boundary divides the image into a dark region and a bright region; when the bilateral filter is centered on a pixel in the bright region, the similarity function s takes values close to 1 for pixels on the same side and close to 0 for pixels in the dark region, as shown in Figure 2(b).
Design and Implementation of a Fast Bilateral Filter
BIAN Xiaohong; CHEN Bin; YANG Junhua
Journal: Microcomputer Information
Year (Volume), Issue: 2008, 24(27)
Abstract: This paper presents a fast algorithm for the bilateral filter. In the algorithm design, a set of dynamic masks is first built for the spatial closeness function, replacing the original time-consuming point-by-point computation and reducing the runtime to two thirds of its original value; the intensity similarity function is then discretized so that the whole bilateral filter formula takes the form of a convolution. In the implementation, the fast Fourier transform (FFT) is introduced for acceleration, reducing the time complexity from the O(N²) of direct convolution to O(N log₂ N), a reduction of one order of magnitude.
Pages: 3 (pp. 213-215)
Affiliation: Chengdu Institute of Computer Applications, Chinese Academy of Sciences, Chengdu 610041, Sichuan, China
Language: Chinese
CLC number: TP391.41
【相关文献】
1.滤波器装置中晶闸管快速投切的设计与实现 [J], 刘洪高;马琪;高军
2.基于比特平面的快速中值滤波器设计与实现 [J], 陈文艺
3.一种LTCC滤波器的快速设计与实现 [J], 李菁;刘培文
4.基于FPGA的快速中值滤波器设计与实现 [J], 李轶博;李小兵;周娴
5.FIR数字滤波器的等波纹快速优化设计与实现 [J], 陶德元;武以立
因版权原因,仅展示原文概要,查看原文内容请购买。
An Edge Extraction Method Based on Inverse Bilateral Filtering
CHEN Liang; YANG Ning; DU Yaqin; GUO Lei
Journal: Journal of Harbin Institute of Technology
Year (Volume), Issue: 2010, 42(11)
Abstract: To strengthen the response of edges in an image and obtain a robust edge extraction algorithm, an image preprocessing method based on inverse bilateral filtering is proposed, which makes the edges in the image stand out clearly; an edge extraction strategy is then applied on top of it. The bilateral filter preserves edges; in view of its good response at low and high frequencies, it is given a new interpretation and an inverse bilateral filter is constructed for edge enhancement. Edges are then extracted on the gradient magnitude image using an adaptive threshold and non-maximum suppression. Experiments show that this inverse use of the filter improves edge extraction under non-uniform illumination and lowers sensitivity to noise.
Pages: 5 (pp. 1823-1827)
Affiliation: School of Automation, Northwestern Polytechnical University, Xi'an 710129, China
Language: Chinese
CLC number: TP391
【相关文献】
1.基于优化设计Gabor滤波器的边缘提取方法 [J], 傅一平;李志能;袁丁
2.一种改进的滤波尺度自适应调整小波边缘提取方法 [J], 张红英;吴斌;吴亚东
3.基于Gabor滤波器虚部的CT脑血管医学图像边缘特征提取方法 [J], 雷印胜;王明时;秦然
4.一种基于中值滤波和边缘检测中模板匹配的图象滤波增强算法 [J], 周焰;罗彩明
5.基于边缘检测与扫描滤波的农机导航基准线提取方法 [J], 何洁;孟庆宽;张漫;仇瑞承;项明;杜尚丰
因版权原因,仅展示原文概要,查看原文内容请购买。
FAST BILATERAL FILTERING BY ADAPTING BLOCK SIZE Wei Yu1,Franz Franchetti1,James C.Hoe1,Yao-Jen Chang2,Tsuhan Chen2 1Carnegie Mellon University,2Cornell UniversityABSTRACTDirect implementations of bilateralfiltering show O(r2)com-putational complexity per pixel,where r is thefilter window radius.Several lower complexity methods have been devel-oped.State-of-the-art low complexity algorithm is an O(1) bilateralfiltering,in which computational cost per pixel is nearly constant for large image size.Although the overall computational complexity does not go up with the window radius,it is linearly proportional to the number of quantiza-tion levels of bilateralfiltering computed per pixel in the al-gorithm.In this paper,we show that overall runtime depends on two factors,computing time per pixel per level and aver-age number of levels per pixel.We explain a fundamental trade-off between these two factors,which can be controlled by adjusting block size.We establish a model to estimate run time and search for the optimal block ing this model,we demonstrate an average speedup of1.2–26.0x over the pervious method for typical bilateralfiltering parameters.Index Terms—bilateralfiltering,algorithm complexity, real time1.INTRODUCTIONBilateralfiltering is a non-linearfilter introduced by Tomasi et al.in1998[1].It smoothes out an image by averaging neighborhood pixels like a Gaussianfilter,but preserves sharp edges by decreasing weights of pixels when the intensity dif-ference is large.Bilateralfiltering is useful in many image processing and vision applications such as image denoising [1,2],tone mapping[3],and stereo matching[4].Direct implementation of bilateralfiltering from defini-tion is computationally expensive.There are generally three directions to make an algorithm run faster.First,design lower complexity algorithms without degrading accuracy;second, optimize code extensively for a given algorithm;third,op-timize code on a more powerful hardware platform.In this paper,our focus is along thefirst path.Related work.The computational complexity per pixel for direct implementation is O(r2),where r is thefilter win-dow radius.Recently,several methods have been proposed to reduce the arithmetic complexity of the algorithm.Porikli et al.[5]proposed a constant time bilateralfiltering with respect tofilter size for three types of bilateralfilters.Quantization of image intensity in this method may significantly degrade the quality of thefiltering output.Also,memory footprint re-quirement is large for storing the integral histograms.Yang et al.[6]propose a O(1)bilateralfiltering which extends Du-rand and Dorsey’s piecewise-linear bilateralfiltering method [3].It can achieve O(1)complexity with arbitrary spatial and arbitrary range kernels(assuming the exact or approximated spatialfiltering is O(1)complexity),with much better accu-racy and less memory footprint than[5].In[6],they discretize the image intensities.For each quantization value,they com-pute a linear spatialfiltering,whose output is defined as Prin-ciple Bilateral Filtered Image Component(PBFIC).Thefinal bilateralfiltering output is a linear interpolation between the two closest PBFICs.For typical parameter settings(see sec-tion4),processing time of[6]on a2.67GHz CPU with2GB RAM for image of size600×800is on the order of tens of milliseconds to several seconds.Overview of proposed method.In this paper,we pro-pose an extension of[6],to further reduce the run time based on an important trade-off we found.The overall computing time depends on two 
An O(1) cost per pixel only reflects the first factor. We will show that there is a fundamental trade-off between these two factors, and that changing the block size can control the trade-off. A block size of 1×1 corresponds to the direct implementation, and a block size equal to the original image size corresponds to the implementation in [6]. The optimal block size lies somewhere in between for general cases. We build a model to predict the run time for a fixed block size, and use this model to search for the best block size.

Synopsis. In the following, we briefly review the O(1) bilateral filtering proposed in [6] and explain the fundamental trade-off in Section 2. Section 3 details a model to estimate the computing time. Section 4 shows experimental results and Section 5 concludes.

2. OBSERVATION OF A FUNDAMENTAL TRADE-OFF

From its definition, bilateral filtering is a compound of linear spatial filtering and non-linear range filtering. The spatial filtering kernel is usually a simple box kernel or a Gaussian kernel, both of which have O(1) (approximate) algorithms. The range filtering kernel is usually a Gaussian kernel, e.g. f_r(a, b) = exp(−(a − b)^2 / (2σ_r^2)), which assigns lower weight to pixels with a large intensity difference. The filter output of a pixel x is

I_B(x) = ( Σ_{y∈N(x)} f_s(y, x) · f_r(I(y), I(x)) · I(y) ) / ( Σ_{y∈N(x)} f_s(y, x) · f_r(I(y), I(x)) ),   (1)

where N(x) is the filter window of radius r around x. In the O(1) method of [6], the intensity range is quantized into L levels I_0 < I_1 < ... < I_{L−1}, and for each level a PBFIC is computed as

I_B^l(x) = ( Σ_{y∈N(x)} f_s(y, x) · f_r(I(y), I_l) · I(y) ) / ( Σ_{y∈N(x)} f_s(y, x) · f_r(I(y), I_l) ).   (2)

Both the dividend and the divisor of Eq. (2) are linear spatial filterings, so each level costs O(1) per pixel. For a pixel with I(x) ∈ [I_{l−1}, I_l), the final output is the linear interpolation between the two bracketing PBFICs,

I_B(x) = ( (I_l − I(x)) · I_B^{l−1}(x) + (I(x) − I_{l−1}) · I_B^l(x) ) / (I_l − I_{l−1}).   (3)

Let T_a denote the computing time per pixel per PBFIC, and L_a the average number of PBFICs computed per pixel; the overall run time is proportional to T_a · L_a. When the image is filtered block by block, a block of size b_h × b_w only needs the quantization levels that cover its own intensity range, so L_a shrinks as the block gets smaller; on the other hand, each block must be extended by r pixels on every side, so T_a grows as the block gets smaller. Fig. 1 illustrates this trade-off on an example image.

Fig. 1. Trade-off between T_a and L_a for varying r = 2, 4, 8, 16. Here L = K for each block. The x-axis shows log2(b_w); we only test square blocks of size b_w × b_w. The y-axis shows T_a, L_a and T_a·L_a normalized to their maximum values. The optimal b_w for T_a·L_a is circled.
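To make Eq. (1) concrete, the following is a minimal sketch of the direct implementation, i.e. the limit that a 1×1 block size degenerates to, using a box spatial kernel and a Gaussian range kernel. It is an illustration only, not the code of [6]; the function and parameter names are our own, and sigmaR is given here in intensity units.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Direct bilateral filter per Eq. (1): box spatial kernel of radius r (f_s = 1
// inside the window), Gaussian range kernel with standard deviation sigmaR.
// The image is 8-bit grayscale, stored row-major. Cost is O(r^2) per pixel.
std::vector<float> bilateralDirect(const std::vector<unsigned char>& img,
                                   int width, int height, int r, float sigmaR) {
    std::vector<float> out(img.size());
    for (int row = 0; row < height; ++row) {
        for (int col = 0; col < width; ++col) {
            const float center = img[row * width + col];
            float num = 0.0f, den = 0.0f;
            // N(x): the (2r+1) x (2r+1) window around x, clamped at the borders.
            for (int i = -r; i <= r; ++i) {
                for (int j = -r; j <= r; ++j) {
                    int rr = std::clamp(row + i, 0, height - 1);
                    int cc = std::clamp(col + j, 0, width - 1);
                    float v = img[rr * width + cc];
                    float d = v - center;
                    float w = std::exp(-d * d / (2.0f * sigmaR * sigmaR));  // f_r
                    num += w * v;   // dividend of Eq. (1)
                    den += w;       // divisor of Eq. (1)
                }
            }
            out[row * width + col] = num / den;
        }
    }
    return out;
}
```

The method of [6] replaces the inner double loop by L box filterings of f_r(I(y), I_l)·I(y) and f_r(I(y), I_l) followed by the interpolation of Eq. (3), which is where the trade-off between T_a and L_a in Fig. 1 comes from.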
The above analysis is an approximation of the run time. Fig. 2 further demonstrates the relationship between run time and block size by measuring the real run time for varying block sizes. We use the same example image; the block sizes are the same as in Fig. 1, and 256σ_r varies in {1, 16}. The code of [6] is publicly available on the authors' website, and we simply modify it so that bilateral filtering is looped over each block. When 256σ_r = 1, L = K, and the trend of the normalized run time with respect to b_w is close to Fig. 1. When 256σ_r = 16, which is a typical setting for image denoising, the increase of L_a is much slower than K for small b_w, but T_a remains a decreasing function of increasing b_w. That is why we observe decreasing run time for small b_w values.

Fig. 2. Measured run time vs. block width for varying r = 2, 4, 8, 16. The x-axis shows log2(b_w); the y-axis shows run time normalized to the maximum value. (Best viewed in color.)

3. PROPOSED MODEL

In this section, we build a much more detailed model to estimate the relationship between run time and block size. The model should be accurate enough to enable us to search for an optimal or near-optimal block size, and it should also be simple so that the modeling overhead is low. The total run time of bilateral filtering for a block of size b_h × b_w consists of four parts.

• Part 1: time to compute f_r(I(y), I_l) and f_r(I(y), I_l)·I(y) in Eq. (2). f_r(I(y), I_l) can be pre-computed and stored in a table; for 8-bit intensities, only 256 table entries are needed. The computing time is estimated as T_1 = C_1 (b_h + 2r)(b_w + 2r) L.
• Part 2: time to compute the dividend and divisor of Eq. (2). Both are box spatial filterings, which can be decomposed into a horizontal sum filter followed by a vertical sum filter. For the horizontal filter, each row takes 2(b_w + r − 1) additions/subtractions (after computing the sum of neighbors for the first pixel, the sum for each consecutive pixel is obtained by adding a new pixel and subtracting an old pixel from the previous sum), and the total number of rows is b_h + 2r. For the vertical filter, each column takes 2(b_h + r − 1) additions/subtractions, and the total number of columns is b_w. So the time is estimated as T_2 = C_2 ((b_w + r − 1)(b_h + 2r) + (b_h + r − 1) b_w) L.
• Part 3: time to compute the division in Eq. (2), estimated as T_3 = C_3 b_h b_w L.
• Part 4: time to interpolate. We implement interpolation so that after computing the l-th PBFIC level, we check every pixel in the block for I(x) ∈ [I_{l−1}, I_l); if so, we interpolate as in Eq. (3). Roughly speaking, every pixel is checked L − 1 times, and only once is its final bilateral filtering output interpolated. The time is estimated as T_4 = C_4 b_h b_w (L − 2) + C_5 b_h b_w.

All parameters C_1, C_2, C_3, C_4, and C_5 are platform dependent and should be calibrated for each hardware platform. We use the example image in Fig. 1 and varying b_h, b_w to calibrate them; both log2(b_h) and log2(b_w) vary from 2 to log2(min(w, h)), and they can differ. For each of the four parts, we use the RDTSC() function (the Intel time-stamp counter) to measure the elapsed time. Fitting C_1, C_2, and C_3 is simple: for example, C_3 is the average of T_3 / (b_h b_w L). C_4 and C_5 are estimated using robustfit() in MATLAB.

We run the experiment on a Dell XPS 720 desktop with an Intel Core 2 Duo E6750 2.67 GHz CPU and 2 GB RAM. The estimated constants are C_1 = 28.56, C_2 = 17.96, C_3 = 32.45, C_4 = 32.52, C_5 = 140.48. Fig. 3 shows, for each of the four parts, the measured run time (T_m) and the run time predicted by the model (T_p); the average prediction error (|T_p − T_m| / T_m) is small.

Fig. 3. Measured run time vs. predicted run time from the model for Parts 1–4 (both run times are in Giga-cycles).
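As a rough illustration of how the four terms combine, the sketch below evaluates T_1 + T_2 + T_3 + T_4 for a given block shape using the calibrated constants reported above. It is only a sketch of the cost model, not the authors' code: the function name is ours, and the number of levels L for the block is taken as an input rather than derived from the block's intensity range.

```cpp
#include <cstdio>

// Platform-dependent constants calibrated in Section 3 (Core 2 Duo E6750).
const double C1 = 28.56, C2 = 17.96, C3 = 32.45, C4 = 32.52, C5 = 140.48;

// Predicted cycles to bilateral-filter one block of size bh x bw with filter
// radius r and L quantization levels, following T1..T4 above.
double predictBlockTime(int bh, int bw, int r, int L) {
    double t1 = C1 * (bh + 2.0 * r) * (bw + 2.0 * r) * L;              // range weights
    double t2 = C2 * ((bw + r - 1.0) * (bh + 2.0 * r)
                    + (bh + r - 1.0) * bw) * L;                        // box filtering
    double t3 = C3 * static_cast<double>(bh) * bw * L;                 // division
    double t4 = C4 * static_cast<double>(bh) * bw * (L - 2)
              + C5 * static_cast<double>(bh) * bw;                     // interpolation
    return t1 + t2 + t3 + t4;
}

int main() {
    // Example: a 64 x 64 block, r = 8, L = 16 levels.
    std::printf("predicted cycles: %.0f\n", predictBlockTime(64, 64, 8, 16));
    return 0;
}
```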
4. EXPERIMENTAL RESULTS

In this section, we show how well the model works and how much speedup can be achieved compared to the method proposed in [6]. The dataset consists of 50 images randomly downloaded from the web, with sizes ranging from 100×120 to 600×800. For the bilateral filtering parameters r and σ_r, we test r in {2, 4, 8, 16} and 256σ_r in {1, 2, 4, 8, 16, 32}, which represent the typical ranges of these parameters. About the model, we are concerned with the following questions:

• How much extra cost does the model introduce?
• Does the optimal block size found by the model match the true optimal one?
• If the answer to the second question is no, how much worse is the found block size than the true one?

For the first question, the modeling time involves collecting the number of quantization levels for a given block size, which requires I_min and I_max for every block. We limit the search range to all square blocks (b_w × b_w, with log2(b_w) from 0 to log2(min(w, h))) and all blocks whose height is twice their width (2b_w × b_w, with log2(b_w) from 0 to log2(min(w, h)) − 1). These block sizes can be sorted in increasing order, and the quantization levels of a block size can easily be built from the previous block size, because the current block contains two of the smaller previous blocks. The measured modeling time is low: in about 97% of all cases it is less than 10% of the measured run time for the optimal block size.

The second and third questions relate to modeling accuracy. Fig. 4(a) shows the predicted run time (T_p) versus the measured run time (T_m) for all images and parameter settings we tested; the average prediction error (|T_p − T_m| / T_m) is small. Fig. 4(b) compares the run time obtained with the block size predicted by the model against the run time of the true optimal block size.

Fig. 4. (a) Measured run time vs. predicted run time from the model; (b) run time of the predicted optimal block size vs. run time of the true optimal block size. (All run times are in Giga-cycles.)
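To illustrate the search procedure just described, the sketch below enumerates the candidate block shapes, predicts the total run time of each with predictBlockTime() from the previous sketch, and keeps the cheapest one. It is a simplified illustration under our own assumptions: the rule mapping a block's intensity range to a level count (levelsForBlock) is hypothetical, and the per-block I_min / I_max are recomputed naively instead of being built up incrementally from smaller blocks as the text describes.

```cpp
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

double predictBlockTime(int bh, int bw, int r, int L);  // from the previous sketch

// Hypothetical rule: roughly one quantization level per 256*sigma_r (here
// sigmaR256) intensity units inside the block's range [iMin, iMax]. The exact
// rule used by the authors is not spelled out here.
int levelsForBlock(int iMin, int iMax, double sigmaR256) {
    return static_cast<int>(std::ceil((iMax - iMin) / sigmaR256)) + 1;
}

// Enumerate square (bw x bw) and tall (2bw x bw) candidate blocks, sum the
// predicted cost over all blocks of the image, and keep the cheapest shape.
// Border blocks are treated as full-size for simplicity.
std::pair<int, int> searchBlockSize(const std::vector<unsigned char>& img,
                                    int width, int height, int r, double sigmaR256) {
    std::pair<int, int> best{height, width};   // default: whole image, as in [6]
    double bestCost = 1e300;
    for (int bw = 1; bw <= std::min(width, height); bw *= 2) {
        for (int tall = 0; tall < 2; ++tall) {
            int bh = tall ? 2 * bw : bw;
            if (bh > height) continue;
            double cost = 0.0;
            for (int by = 0; by < height; by += bh) {
                for (int bx = 0; bx < width; bx += bw) {
                    int iMin = 255, iMax = 0;   // intensity range of this block
                    for (int y = by; y < std::min(by + bh, height); ++y)
                        for (int x = bx; x < std::min(bx + bw, width); ++x) {
                            int v = img[y * width + x];
                            iMin = std::min(iMin, v);
                            iMax = std::max(iMax, v);
                        }
                    cost += predictBlockTime(bh, bw, r,
                                             levelsForBlock(iMin, iMax, sigmaR256));
                }
            }
            if (cost < bestCost) { bestCost = cost; best = {bh, bw}; }
        }
    }
    return best;
}
```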
Next we show the speedup of using this model over using the full image size as the block size, which is exactly the O(1) bilateral filtering in [6]. Fig. 5 shows, for each σ_r, the average speedup over all images versus r. In each subfigure, one line shows the upper bound of the speedup, obtained by using the true optimal block size, and the line marked with × shows the speedup actually achieved by using the optimal block size predicted by the model; the latter takes the model overhead into account. The speedup decreases as r increases, which is consistent with Fig. 1: when r is large, T_a becomes the dominating factor in the run time and encourages a large block size, and for large block sizes both T_a and L_a change slowly with respect to the block size, so the observed speedup is small. The speedup also decreases as σ_r increases, especially for small r. When r is small, L_a becomes the dominating factor; however, the difference in L_a between small and large block sizes becomes smaller as σ_r gets larger. For example, L_a = 1 for a 1×1 block, but L_a ≈ 256 / (256σ_r) even when the whole image is one block, so the gap shrinks as σ_r grows.

Fig. 5. Average speedup vs. r over the O(1) bilateral filtering in [6] for varying 256σ_r. One line shows the upper bound of the speedup obtained with the true optimal block size; the line marked with × shows the real speedup of the proposed model.

5. CONCLUSION

In this paper, we show a fundamental trade-off between two factors in the O(1) bilateral filtering method of [6]: the computing time per pixel per quantization level and the average number of quantization levels per pixel. The trade-off can be controlled by varying the block size. We build a timing model to estimate the run time for a given block size. Experiments show that the model gives a reasonably accurate estimate of the run time with negligible overhead. More importantly, the run time of the optimal block size predicted by the model is very close to the run time of the true optimal block size in most cases. Experiments also demonstrate an average speedup of 1.2–26.0x over all images for typical parameter settings. We expect even more significant speedups for HDR (high dynamic range) images, because the intensity range of HDR images is usually much larger than 256, the typical range of 8-bit digital images.

6. REFERENCES

[1] C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," in ICCV, 1998.
[2] A. Buades, B. Coll, and J. M. Morel, "A review of image denoising algorithms, with a new one," Multiscale Modeling and Simulation, 2005.
[3] F. Durand and J. Dorsey, "Fast bilateral filtering for the display of high-dynamic-range images," in Proc. of SIGGRAPH, 2002.
[4] K. J. Yoon and I. S. Kweon, "Adaptive support-weight approach for correspondence search," IEEE Trans. on PAMI, 2006.
[5] F. Porikli, "Constant time O(1) bilateral filtering," in Proc. of CVPR, 2008.
[6] Q. Yang, K.-H. Tan, and N. Ahuja, "Real-time O(1) bilateral filtering," in Proc. of CVPR, 2009.