一种提前终止CU划分和模式选择的HEVC快速算法

HEVC运动估计快速算法优化及硬件实现

1 引言
高效视频编码(HEVC)作为新一代标准,沿用了上一代标准H.264/AVC的编码框架[1]。

但针对编码框架中的各个环节,H.265/HEVC都分别引入了新技术。

比如H.264/AVC是以宏块作为编码单元,而在H.265/HEVC中采用了编码树单元(CTU),其往下划分的结构有编码单元(CU)、预测单元(PU)和变换单元(TU)[2]。

CU的尺寸不再局限于16×16,而可以根据深度划分,从64×64分割到8×8大小。
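也就是说,划分深度d与CU边长之间是简单的2的幂关系(边长=64>>d),下面用一小段示意代码说明:

#include <cstdio>

int main()
{
    const int maxCuSize = 64;   // CTU(最大CU)边长
    const int maxDepth  = 3;    // 最大划分深度
    for (int d = 0; d <= maxDepth; ++d)
    {
        int size = maxCuSize >> d;                     // 每加深一层边长减半:64、32、16、8
        std::printf("depth %d -> %dx%d CU\n", d, size, size);
    }
    return 0;
}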

而在实际编码过程中,计算量最大的是帧间预测部分,这是由于视频图像时间冗余大于空间冗余的特性所造成的。

在模式选择的整个迭代过程中,以运动估计为例,不同尺寸的PU块都需要经过搜索、插值来找到各自最佳的匹配块。

这是一个繁琐且非常耗时的过程。

目前已有很多学者对帧间预测模式选择提出了简化算法。文献[3]提出了一种扩展和迭代搜索(S&IS)以及低密度和迭代搜索(LD&IS)的运动估计算法,由于遍历每个PU时具有规则的周期数,这种方式对硬件电路设计更加友好;但运动估计所消耗的周期数本身受视频序列特点影响较大,若将周期数固定,可能会对编码性能造成一定影响。

文献[4]提出了用基于运动矢量相似性的运动估计快速终止算法。

利用宏块内子块运动矢量一致的特点,通过计算排除掉不可能的划分方式,从而达到提前终止运动估计的目的。

文献[5]也采用基于划分深度的先粗选再细选的搜索方式,但其搜索步长、下采样比例以及搜索方式均与本文不同,且最终造成的比特率增加较大,对编码性能有一定损失。

文献[6]采用了一种快速中心搜索算法,通过发现一帧图像中,静止宏块、慢速运动宏块和快速运动宏块的比例关系,改变全搜索的遍历顺序,并设置提前终止规则,达到快速中心搜索的目的。

文献[7]采用了一种易于硬件实现的整像素运动估计搜索方式,使用并行聚类树搜索方式同时处理各个PU,并在搜索之后将相同MV候选的PU汇聚成一组,下一次的搜索将以这些组为单位进行统一搜索。

基于提前终止策略改进的运动估计算法

《软件导刊》(Software Guide)第22卷第7期,2023年7月(Vol.22 No.7, Jul. 2023)
基于提前终止策略改进的运动估计算法
朱鑫磊,汪伟(上海理工大学光电信息与计算机工程学院,上海 200093)
摘要:针对HM-16.14中TZSearch标准算法存在的计算复杂度高、耗时相对较长等问题,提出一种基于提前终止策略的改进TZSearch算法。

首先,根据编码产生的率失真代价对编码单元、变换单元和预测单元的深度进行划分,有效避免了额外的划分深度;然后,在TZSearch初始网格搜索过程中,采用钻石搜索和六边形搜索两种搜索方式,根据运动矢量分布位置选择一种更为有效的方式,精确找出最佳匹配点;最后,使用OARP栅格搜索和精细搜索完成运动估计。

由实验结果可知,该方法与标准算法相比,平均降低了60%以上的TZSearch运动估计耗时,且基本不影响视频质量。
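其中"在初始网格搜索中按运动矢量分布在菱形与六边形两种模板间二选一"这一步,可用下面的简化示意来理解(模板点集为常见取法,选择条件、阈值与代价函数均为笔者假设,并非原文实现):

#include <cstdlib>
#include <functional>

struct MV { int x, y; };

// 菱形与六边形两种搜索模板的相对偏移(基本形状,仅作示意)
static const MV kDiamond[] = { {0,-1}, {-1,0}, {1,0}, {0,1} };
static const MV kHexagon[] = { {-2,0}, {-1,-2}, {1,-2}, {2,0}, {1,2}, {-1,2} };

// 围绕中心点在给定模板上做一轮搜索,返回代价最小的位置
static MV searchOnePattern(MV center, const MV* pattern, int numPoints,
                           const std::function<unsigned(MV)>& cost)
{
    MV best = center;
    unsigned bestCost = cost(center);
    for (int i = 0; i < numPoints; ++i)
    {
        MV cand = { center.x + pattern[i].x, center.y + pattern[i].y };
        unsigned c = cost(cand);
        if (c < bestCost) { bestCost = c; best = cand; }
    }
    return best;
}

// 假设性的模板选择:预测运动矢量幅值较小(运动弱)时用菱形模板,较大时用六边形模板
MV initialGridSearch(MV predMV, const std::function<unsigned(MV)>& cost)
{
    int mag = std::abs(predMV.x) + std::abs(predMV.y);
    bool useDiamond = (mag <= 4);                        // 阈值4仅为示意取值
    const MV* pattern = useDiamond ? kDiamond : kHexagon;
    int n = useDiamond ? 4 : 6;
    return searchOnePattern(predMV, pattern, n, cost);   // 以预测MV为中心搜索一轮
}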

关键词:TZSearch算法;提前终止策略;栅格搜索;精细搜索;运动估计
DOI:10.11907/rjdk.221887 开放科学(资源服务)标识码(OSID) 中图分类号:TP391.1 文献标识码:A 文章编号:1672-7800(2023)007-0051-08
A Modified Motion Estimation Algorithm Based on Early Termination Strategy
ZHU Xinlei, WANG Wei (School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China)
Abstract: Considering the high computational complexity and relatively long time consumption of the TZSearch standard algorithm within HM-16.14, an improved TZSearch algorithm based on early termination strategy is proposed to improve the efficiency of video coding. Firstly, the depth sorting of the coding unit, transform unit and prediction unit is calculated according to the performance of rate distortion, which can effectively decrease additional division depths. Secondly, two search methods, i.e. diamond search and hexagonal search, are employed within the initial grid search step of TZSearch in order to precisely find the best matching point according to the motion vector distribution. Finally, OARP raster search and fine search are used to acquire the motion estimation results. Compared with the standard algorithm, experimental results show that the proposed method reduces more than 60% motion estimation time consumption on average, yet keeps the similar video quality.
Key Words: TZSearch algorithm; early termination strategy; raster search; fine search; motion estimation
0 引言
随着视频技术的快速发展,依靠视频传递信息变得越来越普及,这使得视频流数据在互联网传输中的占比越来越大。

一种快速HEVC编码单元决策算法

一种快速HEVC编码单元决策算法
雷海军;杨忠旺;陈骁;袁梅冷
【摘要】分析高效视频编码标准(HEVC)的编码单元算法,针对当前视频编码标准计算复杂度大的问题,基于相邻编码单元相关性和纹理特性,提出一种快速HEVC编码单元决策算法。

该算法统计当前编码单元和相邻编码单元的相关性,分析编码单元的纹理复杂度,并设定合理的阈值,决定检测是否提前终止,以此快速找到最优编码单元。

仿真结果表明,该算法与HEVC参考软件HM8.0相比,在码率增加忽略不计的情况下,编码时间平均缩短了37.4%,最高可达48.2%。
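下面给出"结合相邻CU相关性与纹理复杂度、设定阈值提前终止划分"这一思路的简化示意(纹理复杂度用亮度方差近似,阈值与相邻深度的组合条件为笔者假设,并非原文的统计模型):

#include <vector>
#include <cstdint>

// 计算一个CU亮度块的方差,作为纹理复杂度的近似度量
double textureComplexity(const std::vector<uint8_t>& luma, int width, int height)
{
    double sum = 0.0, sqSum = 0.0;
    int n = width * height;
    for (int i = 0; i < n; ++i)
    {
        sum   += luma[i];
        sqSum += static_cast<double>(luma[i]) * luma[i];
    }
    double mean = sum / n;
    return sqSum / n - mean * mean;
}

// 假设性的提前终止判断:
// 若相邻已编码CU都停在当前深度或更浅,且当前CU纹理平坦,则不再向下划分
bool earlyTerminateSplit(int curDepth,
                         const std::vector<int>& neighborDepths, // 左、上等相邻CU的最终深度
                         double complexity,
                         double flatThreshold /*示意阈值*/)
{
    bool neighborsShallow = true;
    for (int d : neighborDepths)
        if (d > curDepth) { neighborsShallow = false; break; }
    return neighborsShallow && complexity < flatThreshold;
}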

This paper analyzes the coding unit algorithm of the High Efficiency Video Coding (HEVC) standard. Aiming at its high computational complexity, a fast coding unit decision algorithm for HEVC is proposed based on coding unit correlation and texture. The algorithm evaluates the correlation between the current coding unit and its adjacent coding units, calculates the texture complexity of the coding unit, sets a reasonable threshold, and decides whether to terminate early, so as to quickly find the optimal coding unit. Simulation results show that, compared with the HEVC reference software HM8.0, the proposed algorithm reduces the encoding time by 37.4% on average and by up to 48.2%, with a negligible bit-rate increase.
【期刊名称】《计算机工程》【年(卷),期】2014(000)003【总页数】4页(P270-273)
【关键词】高效视频编码;编码单元;计算复杂度;提前终止;纹理特征;码率
【作者】雷海军;杨忠旺;陈骁;袁梅冷
【作者单位】深圳大学计算机与软件学院,广东深圳 518060;深圳大学计算机与软件学院,广东深圳 518060;创维集团深圳研究院,广东深圳 518060;深圳职业技术学院,广东深圳 518060
【正文语种】中文【中图分类】TP37
随着数字技术的快速发展以及互联网的广泛应用,图像和视频压缩技术已经渗透到人们的日常生活中,而且近几年对高清和超高清服务的需求越来越大。

HEVC快速帧内模式和深度决策算法

HEVC快速帧内模式和深度决策算法伍冠健;宋立锋【摘要】A fast algorithm based on interlaced extracting modes,statistics of pixel gradient and residual of sub-CU for intra prediction in High Efficiency Video Coding (HEVC) is presented in this paper.Cor-responding to 33 intra angle modes in HEVC,gradient directions are divided into 33 classes.And gradi-ent direction of every pixel in a CU is calculated.Angle modes with even number are calculated and ranked firstly,then the set of candidate mode is gained by fast comparison.According to the cumulative number of pixels in every class of gradient direction,some candidate modes can be rejected.When cal-culating Hadamard Transform predicted residual ( SATD) of current PU,SATDs of the four sub-PUs are noted.According the relative ratio of them,the traversal calculation of next depth of current PU is rejec-ted.Experimental results show that,compared with HM13.0,the proposed method performs about 54%time saving of intra encoding with only1%increment on the total rate.%针对HEVC帧内预测过程计算复杂度较大的问题,提出基于隔点模式抽取、像素梯度统计和子PU残差相对比的快速帧内预测算法.对应HEVC的33种帧内角度模式,按区间划分33类梯度方向并计算PU各个像素的梯度方向.先对偶数编号的角度模式计算排序,再快速比较得到候选模式集.然后根据所属各类梯度方向的像素累计个数,舍弃部分候选模式.在计算当前PU的哈达玛变换预测残差( Sum of Absolute Transformed Difference,SATD)的同时,记录该PU内4个子PU的SATD,并通过对这4个SATD之间的相对比,跳过当前PU之后深度的计算.实验结果表明,与HEVC标准测试模型HM13.0的算法相比,本文所提出的算法可节省约54%的帧内编码时间,而码率只有约1%的增加.【期刊名称】《广东工业大学学报》【年(卷),期】2015(032)004【总页数】6页(P132-137)【关键词】HEVC;帧内预测;隔点模式抽取;像素梯度;子PU残差;快速算法【作者】伍冠健;宋立锋【作者单位】广东工业大学信息工程学院,广东广州510006;广东工业大学信息工程学院,广东广州510006【正文语种】中文【中图分类】TN919.81新一代视频压缩编码标准高效视频编码(High Efficiency Video Coding,HEVC)[1]由 MPEG 与 VCEG组成的视频编码联合专家组(JCT-VC)制定,最初于2010年1月启动,并于2013年1月正式完成发布[2].HEVC的目标在于大幅提高视频编码效率,在相同图像质量的前提下,压缩率比H.264/AVC高档次(High Profile)提高一倍,支持从320×240到7 980×4 320 各种分辨率的视频[3].同时,相比H.264的传输特点[4],HEVC 的码流结构更适合传输速度越来越快的网络环境[5].为此,HEVC在编码流程的各个环节都做了大量的改进.其中在帧内预测方面,预测模式由8种扩展为36种;编码块尺寸由4×4和8×8两种扩展为由4×4到64×64共5种尺寸.但多种模式和多层深度的遍历计算,在提高了预测精度的同时,也大幅增加了运算复杂度[6].因此,对HEVC的应用推广来说,快速帧内预测算法的提出是必然需求.现阶段HEVC的快速帧内预测算法可分为两类.第一类是减少遍历计算的模式个数,尤其是进入率失真优化(Rate-Distortion Optimization,RDO)阶段的候选模式个数.如文献[7-8]提出的根据像素梯度统计信息,减少进入粗模式选择(Rough Mode Decision,RMD)和RDO的候选模式个数.文献[7]的算法在HEVC标准测试模型HM4.0的测试中,编码时间减少了20%,码率增加了0.74%.此外,文献[9]利用组合比较的方法,减少了RMD遍历的模式个数.文献[10-11]利用分组合并的方法快速得到编码单元(Codiny Unit,CU)的最佳模式.第二类是减少遍历计算的层次深度.如文献[11]提出了基于CU当前深度的总代价与当前量化参数(Quantization Parameter,QP)的比较,跳过之后的深度遍历,该算法在HM5.2rc1的测试中,编码时间减少了24%,码率增加了0.83%.此外,文献[12]提出了通过 CU当前深度与上一深度的率失真代价(Rate-Distortion Cost,RD Cost)的比较跳过当前以及之后的深度遍历的算法;文献[13-14]提出了根据区域图像特征快速选定CU划分深度的算法.以上算法虽然都在一定程度上减少了编码时间,但减少幅度还不大.本文提出的算法能在码率增幅极小的前提下,大幅减少帧内预测编码时间,更有利于HEVC的应用推广.1HEVC帧内编码1.1 帧内编码块和预测模式HEVC划定了如图1所示的四叉树结构的编码单元(Coding Unit,CU),以此作为单位进行编码.先规定最大的CU(Largest CU,LCU),最大为64×64,往下可逐层划分,最小尺寸为8×8.每一层成为一个深度,并规定LCU的深度为0,往下递增.在帧内编码中,LCU之间按光栅扫描顺序逐个编码,而在LCU内部则按Z扫描顺序对各个深度的CU进行遍历预测.在非最大深度CU中,预测单元(PredictionUnit,PU)尺寸都与CU相同,即按2N×2N方式划分.当完成最大深度CU的预测后,则要在最大深度CU的基础上再往下划分一层深度PU进行遍历预测,即按N×N方式划分.如图2所示,HEVC的帧内预测模式有36种,其中包括33种角度预测模式(编号为2到33),2种非角度预测模式(编号为0的Planar和编号为1的DC模式),1种色度预测特有的亮度导出模式(编号为36的DM,编号为35的LM预测模式已被弃用,但仍保留其编号)[15].图1 基于四叉树的CU划分Fig.1 The partition of coding unit based on quad-tree图2 33种帧内角度预测模式Fig.2 33 kinds of angular intra prediction modes 1.2 帧内预测流程自2013年JCT-VC正式发布了HEVC后,其标准测试软件HM趋向稳定.根据HM13.0,HEVC的帧内亮度预测流程如图3所示,可分为4个环节.首先进行RMD,对PU遍历35种模式预测.每次对预测块与原始块的残差使用哈达玛变换,得到其SATD值.定义每个模式RMD预测的总代价HC为其中SATDpre是预测块与原始块的SATD值,λ是拉格朗日乘子,R是使用该模式编码的比特率.在这35种模式中,根据PU的尺寸选择其HC最小的N个模式作为候选模式集MC.若PU尺寸大于或等于16×16,则N=3;否则N=8.其次,进行最有可能模式选择(Most Probable Mode,MPM),即根据当前PU的左、上边已编码的邻近PU的模式,添加1或2种模式进候选模式集MC,并更新N的值.然后对这N种候选模式进行第一次率失真优化(RDO1),定义其率失真代价RC 为其中SSDrec(Sum of Squared 
Difference)是使用该模式编码的重建块与原始块的残差.RC最小的模式即为帧内预测的最佳模式.最后对该最佳模式进行第二次率失真优化(RDO2),该次RDO使用了TU划分,根据两者RC的比较,以确定PU 是作为一个TU,还是划分为多个TU进行编码.图3 HM13.0单层深度帧内预测流程Fig.3 The process of intra prediction in one depth of HM13.02 HEVC快速帧内预测算法2.1 减少遍历计算的模式个数HM标准帧内预测算法中,在RMD环节要计算35个模式,在两次RDO环节共要计算N+1个模式,其计算量过于庞大.由于RDO使用的是重建块,每计算一次RDO,就相当于对PU做了一次完整的编码流程,计算量更是远大于RMD.因此,本文提出如图4所示的一系列快速算法,以减少进入RMD和RDO环节的模式个数.图4 修改后的单层深度帧内预测流程Fig.4 The modified process of intra prediction in one depth2.1.1 减少RMD计算的模式个数RMD的快速算法分4步,每计算一个模式则排序更新一次RMD候选模式集MC.首先计算17个编号为偶数的角度模式,并根据其HC值,由小到大排序,选取前N个模式组成RMD候选模式集MC.其次,检测当前PU是否存在左、上、左上、右上邻近PU,以及其所属上一深度的PU.若这些PU存在并已完成预测,则对当前PU计算它们所采用的模式.然后,比较MC内前两个模式M1、M2的HC值,若满足式(3),则分别计算这两个模式的两个未被检测过的邻近编号模式,否则只计算M1的两个邻近编号模式.α的值本文取1.2.例如若M1=4,则要检测编号为3与5的两个模式;若M1=9,则要检测编号为7与11的两个模式.最后计算编号为0和1的两个非角度模式.经此修改,RMD实际遍历计算的模式个数通常只有21到23个,仅占全部模式个数的60%.该部分算法全流程如图5所示.此算法的思路在于舍弃原本全搜索得到最佳候选集的方式,改为由疏到密的隔点抽样,逐步逼近最佳的候选模式.其中对邻近PU以及上层PU模式的检测,是基于它们与当前PU之间存在一定的空域相似性,有较大可能选取相同或相近的最佳模式.图5 修改后的RMD流程Fig.5 The modified process of rough mode decision 2.1.2 提前确定最佳模式在进入第一次RDO之前,检测当前PU的左、上邻近PU以及所属上一深度的PU是否都存在,并且它们所选的最佳模式是否都与当前PU的RMD最佳候选模式相同.若相同,再检测左上、右上邻近PU.只要这两者中存在一个,并且其所选的最佳模式与当前PU的RMD最佳候选模式相同,则确定此最佳候选模式为最佳模式,不再对其他候选模式进行RDO计算;否则进行像素梯度检测,再次筛选候选模式集MC.2.1.3 像素梯度检测每进入一个新的LCU,则对该LCU内每个像素计算其梯度方向.定义像素Px,y的右边和下边邻近像素分别为 Px+1,y和 Px,y+1.若 Px,y是 LCU 的右或下边界像素,则令其右或下邻近像素与其相等.像素梯度方向为定义与Angle相对应的角度模式编号如表1所示.在该LCU内的各个深度的PU,都可统计该PU内采用某个模式的像素个数.对候选模式集MC第3个及之后的候选模式检测,定义当前PU中采用编号为i(2≤i≤33)的角度候选模式的像素个数为ni,当前PU的宽度为W,高度为H.若满足式(5)[8],则可认为在全部33个角度模式中,模式i成为最佳模式的概率远低于平均水平,可以在MC中舍弃模式i. 表1 像素梯度方向与帧内角度模式编号的对应关系Tab.1 The correspondencebetween gradient directions of a pixel and numbers of angular intra prediction modes?若当前PU尺寸大于或等于16×16,则对所有角度模式按其所包含的像素个数由多到少进行排序,选取前N个模式作为比较集MC0与MC中第3个以及之后的角度候选模式逐个对照.若某个候选模式不在MC0之中,则舍弃该候选模式.由于尺寸为8×8和4×4的PU所含的像素个数较少,各ni相差不明显,该排序比较算法的准确性会因而下降,故不适用于它们.2.1.4 子PU残差比较一个PU的SATD,是对其划分为多个8×8或4×4块分别计算得到的SATD的总和.因此,在计算某个非最大深度PU的SATD时,可记录它所包含的4个子PU各自的SATD,记为subSATDi.并定义这4个子PU残差值之间的总相对误差率为subR.PU的帧内预测都是基于左、上邻近PU的像素得到的,因此在PU内部,越靠右、下方向的像素,其预测误差越大,这种情况在PU包含丰富的纹理信息时愈加明显.从式(6)~(7)可知,对某个非最大深度PU来说,可根据其subR的大小体现它包含的纹理信息的丰富程度,并由此推测PU是否需要TU划分乃至下一深度的PU划分.从实验数据可知,4×4的PU必定不需TU划分;8×8的PU有极大几率不需TU划分,因此可以跳过RDO2;尺寸在16×16及以上的PU是否需TU划分的随机性较大,不应跳过RDO2.本文提出的算法是:4×4、8×8、64×64的 PU 跳过 RDO2;尺寸在16×16及以上的PU最佳模式若满足式(8)则跳过RDO2.β 的值本文取0.3.2.2 减少遍历计算的层次深度若当前PU是深度为d(d>0)的第k(1≤ k≤4)个PU,在完成该PU的帧内预测后,可检测其上一深度的subRd-1值,若不满足式(9),则按照原本的预测流程继续;否则可进一步比较率失真代价.定义上一深度的PU的率失真代价为RCd-1,其4个子PU采用最佳模式下的 STAD值分别为subSATDd-1,i,当前深度k个PU的率失真代价分别为 RCd,i.若满足式(12)[12],则可认为当前 CU 的图像比较平缓,不需更大深度的划分,即跳过之后的遍历计算流程,提前确定d-1为当前LCU的最佳划分深度.γ的值本文取0.1.3 实验结果与分析3.1 实验条件本文所提出的算法已在HEVC标准测试模型HM13.0实现,编码采用HM13.0附带的All Intra和Random Access的Main Profile标准配置文件.对27个HEVC通用测试序列各以22、27、32、37共4个QP进行编码.实验环境是:主频3.3 GHz的Intel Core i3-2120 CPU,4Gbyte内存的 Windows 7(64位)系统,代码编译工具使用VS2010.3.2 实验结果分析本文所提出的算法与原HM13.0之间的性能比较由Bjontegaard[16]的算法体现.BDRate是经由 RD曲线拟合得到的,在相同亮度PSNR下,本文算法相比HM13.0所增加的比特率百分比.同理,BDPSNR是在相同比特率下,本文算法与HM13.0的亮度PSNR差值.ATSP(Average Time Saving Percent)是本文算法相比HM13.0所减少的编码时间百分比.测试结果如表2所示.在All Intra配置下,从Class A到Class E皆是由摄像机拍摄的常规视频序列,它们的峰值信噪比平均下降0.045dB,比特率增加0.858%.Class F为特殊用途视频序列,峰值信噪比平均下降0.270dB,比特率增加2.346%.全部视频序列的峰值信噪比总体下降0.079dB,比特率增加1.079%,编码时间减少53.67%.在 Random Access配置下,从Class A到Class E视频序列的峰值信噪比平均下降0.026dB,比特率增加0.795%.Class F的峰值信噪比平均下降0.182dB,比特率增加1.832%.全部视频序列的峰值信噪比总体下降0.049dB,比特率增加 0.949%,编码时间减少12.97%.由此可见,本文所提出的算法能大幅减少帧内编码时间,而所造成的总体图像质量下降和码率增加非常轻微. 
表2 27个HEVC通用测试序列的实验数据Tab.2 The experimental data of 27 universal test sequences of HEVC?4 结语本文介绍了新一代视频编码标准HEVC的帧内预测技术,以及一些前人关于HEVC快速帧内预测算法的研究成果,并提出了新的快速帧内预测算法.该算法以像素梯度统计和子PU残差比较作为参考要素,在帧内预测时,舍弃了某些模式和深度的遍历计算.在与HM13.0的实验结果比较中,本文所提出的算法大幅减少了帧内编码时间,而因此只有轻微的图像质量下降和比特率增加.参考文献:[1]Bross B,Han W J,Sullivan G J,et al.High efficiency videocoding(HEVC)text specification draft 9[S].document JCTVC K1003,ITU-T/ISO/IEC Joint Collaborative Team on Video Coding(JCT-VC),2012. [2]Sullivan G J,Ohm J,Han W,et al.Overview of the high efficiency video coding(HEVC)standard[J].Circuits and Systems for Video Technology,IEEE Transactions,2012,22(12):1649-1668.[3]朱秀昌,李欣,陈杰.新一代视频编码标准——HEVC[J].南京邮电大学学报:自然科学版,2013,33(3):1-11.Zhu X C,Li X,Chen J.Next generation video coding standard —— HEVC[J].Journal of Nanjing University of Posts and Telecommunications:Natural Science Edition,2013,33(3):1-11. [4]刘国英,章云,陈泓屺.H.264视频码流自适应传输的研究与实现[J].广东工业大学学报,2013,30(4):83-87.Liu G Y,Zhang Y,Chen H Q.Researchon and implementation of adaptive transmission of H.264 video stream [J].Journal of Guangdong University of Technology ,2013,30(4):83-87. [5]Rickard S,Ying C,Akira F,et al.Overview of HEVC high-level syntax and reference picture management[J].Circuits and Systems for Video Technology,IEEE Transactions,2012,22(12):1858-1870.[6]Guilherme C,Pedro A,Luciano A,et al.Performance and computational complexity assessment of high-efficiency video encoders [J].Circuits and Systems for Video Technology,IEEE Transactions,2012,22(12):1899-1909.[7]Jiang W,Ma H,Chen Y.Gradient based fast mode decision algorithm for intra prediction in HEVC[C]∥ Consumer Electronics,Communications and Networks(CECNet),2012 2nd International Conference.Yichang:IEEE,2012:1836-1840.[8]Chen G,Pei Z,Sun L,et al.Fast intra prediction for HEVC based on pixel gradient statistics and mode refinement[C]∥ Signal and Information Processing(ChinaSIP),2013 IEEE China Summit&International.Beijing:IEEE,2013:514-517.[9]石飞宇,刘昱.一种HEVC快速帧内模式判断算法[J].电视技术,2013,37(11):8-11.Shi F Y,Liu Y.Fast intra mode decision algorithm for HEVC [J].Digital Video,37(11):8-11.[10]Yan S,Hong L,He W,etal.Group-Based Fast Mode Decision Algorithm for Intra Prediction in HEVC[C]∥Signal Image Technology and Internet Based Systems(SITIS),2012 Eighth International.Naples:[s.n],2012:225-229.[11]Kim J,Choe Y,Kim Y.Fast coding unit size decision algorithm for intra coding in HEVC[C]∥Consumer Electronics(ICCE),2013 IEEE International Conference.NV Las Vegas:IEEE,2013:637-638.[12]Zhang H,Ma Z.Early termination schemes for fast intra mode decision in high efficiency video coding[C]∥ Circuits and Systems(ISCAS),2013 IEEE International Symposium.Beijing:IEEE,2013:45-48.[13]蒋洁,郭宝龙,莫玮,等.利用平滑区域检测的HEVC帧内编码快速算法[J].西安电子科技大学学报:自然科学版,2013,40(3):194-200.Jiang J,GuoB L,Mo W,et al.Fast intra coding algorithm using smooth region detection for HEVC[J].Journal of Xidian University:Natural Science Edition,2013,40(3):194-200.[14]甘勇,赵晓荣,李天豹,等.基于图像特征的HEVC快速帧内预测算法[J].郑州轻工业学院学报:自然科学版,2014,29(1):90-93.Gan Y,Zhao X R,Li T B,et al.Fast intra prediction algorithm based on picture feature for HEVC [J].Journal of Zhengzhou University of Light Industry:Natural Science Edition,2014,29(1):90-93.[15]Lainema J,Bossen F,Han W,et al.Intra coding of the HEVC standard[J].Circuits and Systems for Video Technology,IEEETransactions,2012,22(12):1792-1801.[16]Bjontegaard G.Calculation of average PSNR differences betweenRD-curves(VCEG-M33)[C]∥ Proc.VCEG A Texas Austin:[s.n],2001.。
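上文2.1.3节按像素梯度方向把PU内像素归入各角度模式并统计个数,用于淘汰候选模式;原文的梯度公式与角度映射表在此处未能完整保留。下面给出思路相近的简化示意(梯度取右、下相邻像素差分,边界像素按原文取其自身;角度到模式编号的映射用均匀量化近似,均为假设):

#include <vector>
#include <cmath>
#include <cstdint>

// 统计PU内各帧内角度模式(HEVC标准编号2..34)对应的像素梯度方向个数
std::vector<int> gradientModeHistogram(const std::vector<uint8_t>& luma,
                                       int width, int height)
{
    const double PI = 3.14159265358979323846;
    std::vector<int> hist(35, 0);             // 下标即模式编号,0/1(非角度模式)不使用
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            int p  = luma[y * width + x];
            // 右、下边界像素按原文做法取其本身(即差分为0)
            int px = (x + 1 < width)  ? luma[y * width + x + 1] : p;
            int py = (y + 1 < height) ? luma[(y + 1) * width + x] : p;
            int gx = px - p;
            int gy = py - p;
            if (gx == 0 && gy == 0) continue;  // 平坦像素不计入方向统计
            double angle = std::atan2(static_cast<double>(gy),
                                      static_cast<double>(gx));
            // 纹理方向具有180°周期性,折叠到[-PI/2, PI/2)
            if (angle >= PI / 2)  angle -= PI;
            if (angle <  -PI / 2) angle += PI;
            // 均匀量化为33个方向类,对应角度模式编号2..34(示意用映射)
            int bin = static_cast<int>((angle + PI / 2) / PI * 33);
            if (bin > 32) bin = 32;
            hist[2 + bin]++;
        }
    }
    return hist;
}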

一种HEVC快速编码单元划分方法

H.264/AVC标准是由国际电信联盟(ITU-T, International Telecommunication Union-Telecommunication Standardization Sector)的视频编码专家组(VCEG, Video Coding Expert Group)和国际标准化组织/国际电工委员会(ISO/IEC, International Organization for Standardization/International Electrotechnical Commission)……
……编码单元LCU(Largest Coding Unit)64×64的率失真值,然后计算四个子CU的率失真值之和,通过比较两者的值决定是否划分:如果当前CU的率失真值较小,则不划分;反之则划分为四个子CU,继续重复上述过程直至深度为4。针对CU的递归划分过程作出了优化,本文将采用一种新颖的方法针对此过程进行……时,首先判断周围已编码CTU深度是否为0,若存在,则不进行遍历划分,而是计算当前CTU与周围已编码深度为0的……
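上文描述的正是HEVC的CU递归划分判决:比较当前CU整体编码的率失真代价与四个子CU代价之和,取较小者,并递归进行直至最大深度。下面给出这一判决流程的简化示意(rdCostOfCU代表对单个CU完成预测与模式选择后得到的率失真代价,此处作为外部回调,仅为示意):

#include <functional>

// 对一个CU(由左上角坐标、边长、深度描述)递归决定是否划分,返回该CU子树的最优率失真代价
double bestCuCost(int x, int y, int size, int depth, int maxDepth,
                  const std::function<double(int, int, int)>& rdCostOfCU,
                  bool& split /*输出:该CU是否划分*/)
{
    double costNoSplit = rdCostOfCU(x, y, size);   // 不划分:当前CU整体编码的代价
    split = false;
    if (depth >= maxDepth || size <= 8)            // 已到最大深度(8x8)则不再划分
        return costNoSplit;

    // 划分:四个子CU代价之和(递归求各自的最优代价)
    double costSplit = 0.0;
    int half = size / 2;
    for (int i = 0; i < 4; ++i)
    {
        bool childSplit = false;
        int cx = x + (i & 1) * half;               // 子CU按Z扫描顺序排列
        int cy = y + (i >> 1) * half;
        costSplit += bestCuCost(cx, cy, half, depth + 1, maxDepth,
                                rdCostOfCU, childSplit);
    }

    // 取代价较小的方式:整体代价更小则不划分,否则划分为四个子CU
    if (costSplit < costNoSplit)
    {
        split = true;
        return costSplit;
    }
    return costNoSplit;
}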

HEVC快速帧间CU分割算法

CorrespondencesA Fast HEVC Inter CU Selection MethodBased on Pyramid Motion DivergenceJian Xiong,Hongliang Li,Senior Member,IEEE,Qingbo Wu,andFanman MengAbstract—The newly developed HEVC video coding standard can achieve higher compression performance than the previous video coding standards,such as MPEG-4,H.263and H.264/A VC.However,HEVC’s high computational complexity raises concerns about the computational burden on real-time application.In this paper,a fast pyramid motion divergence(PMD)based CU selection algorithm is presented for HEVC inter prediction.The PMD features are calculated with estimated optical flow of the downsampled frames.Theoretical analysis shows that PMD can be used to help selecting CU size.A nearest neighboring like method is used to determine the CU splittings.Experimental results show that the fast inter prediction method speeds up the inter coding significantly with negligible loss of the peak signal-to-noise ratio.Index Terms—HEVC,inter prediction,motion divergence,H.264,video coding,CU decision.I.I NTRODUCTIONThe state-of-the-art video coding standard H.264/A VC[1]provides superior coding performance for the standard definition video contents. However,the ever increasing popularity of high definition(HD)video applications[2],[3]are creating stronger demand of video compression technologies to provide higher coding efficiency[4].High Efficiency Video Coding(HEVC)is the newly developed video coding standard. HEVC is on the block-based hybrid coding framework.Novel coding tools were proposed to better compress the HD video contents. An advantage of HEVC is that,it provides the use of variable sizes (from6464to44)of coding,prediction and transform blocks.For applications targeting at high performance video coding,to achieve the best coding performance,all sizes of the coding units(CUs)in the quad-tree are checked in a recursive manner to decide the optimum mode.However,the recursive processes adopt the Rate-Distortion Optimization(RDO)techniques which may result in extremely high computational complexities.Therefore,any attempt to reduce the huge computational time without degrading the coding performance Manuscript received December14,2012;revised May17,2013and August 07,2013;accepted October03,2013.Date of publication November20,2013; date of current version January15,2014.This work was supported in part by NSFC(No.61271289),The Ph.D.Programs Foundation of Ministry of Educa-tion of China(No.20110185110002),and National High Technology Research and Development Program of China(863Program,No.2012AA011503).The associate editor coordinating the review of this manuscript and approving it for publication was Prof.Zhihai(Henry)He.The authors are with School of Electronic Engineering,University of Electronic Science and Technology of China,Chengdu611731,China(e-mail: hlli@).Color versions of one or more of thefigures in this paper are available online at .Digital Object Identifier10.1109/TMM.2013.2291958extends the video coding applications to real time.This paper focuses on the fast CU selection for inter prediction in HEVC encoder. 
Recently,studies on reducing the computational complexity of HEVC encoders are reported.A fast CU size decision method was proposed in[5].The CUs were skipped in both the frame and CU levels.A fast HEVC CU size decision method was proposed in[6] by selecting the depth range and early termination using the neighbor and co-located CU information.An early detection of SKIP mode was proposed in[7]by checking the differential motion vector(MV) and the coded blockflag(CBF).A fast method which reduces the RDO computational complexity by top skip and early termination was proposed in[8].A Bayesian decision rule based algorithm was proposed for fast CU size decision in[9].In this paper,a pyramid motion divergence(PMD)based method is proposed to early skip the specific inter CUs in HEVC.PMD feature is derived to be used for CU selection.Before coding each frame,the op-ticalflow is fast estimated for the corresponding downsampled frame. For each CU,the PMD is evaluated as the variance of the opticalflows of the current CU and the sub-CUs.To reduce the computational com-plexity of RDO,a nearest neighboring(NN)like algorithm is used to determine the splitting type of each CU.The rest of the paper is organized as follows.An overview of the HEVC CU size decision is presented in Section II.The analysis of the PMD features and the proposed fast PMD-base CU size selection method are given in Section III.Experiments are provided in Section IV to verify the proposed method.Finally,we draw the concluding re-marks in Section V.II.O VERVIEW OF HEVC CU P REDICTIONIn HEVC,the pictures are divided into a sequence of coding tree units(CTUs).Each CTU can be recursively divided into four equally size CUs to form a quad-tree coding structure.The CUs may take sizes from88luma samples to6464.The CU is the basic coding region [10].The basic unit used for carrying the information in the prediction process is called the prediction unit(PU).Each CU may be predicted with one or more PUs depending on the partition modes.There are up to8partition modes for the inter-coded CUs.The optimum mode is chosen as the one with the minimal R-D cost,which is calculated as(1) where specifies the bit cost to be considered for mode deci-sion,and denote the sum of squared differ-ences(SSD)between the original blocks and the reconstructed blocks of the luma and chroma parts,respectively.The parameter is the weighting factor in order to weight the SSD of the chroma part,and denotes the Lagrangian multiplier.The CU size decision is recursively performed on the quad-tree[11]. 
We denote a CU in the depth as.That is,is the CTU.CUs with the size2N2N,can be divided into four N N CUs.However,CUs with the size88cannot be split any further.For each CU size,to judge whether a CU should be split or not,R-D costs1520-9210©2013IEEE.Personal use is permitted,but republication/redistribution requires IEEE permission.See /publications_standards/publications/rights/index.html for more information.are calculated in manner and manner respectively.The final R-D cost of is calculated as follows:(2) where denotes the minimal R-D cost of.and denote the R-D costs of the CU encoded in the manner and the manner,respectively.The is the sum of the R-D costs of the four sub-CUs in the depth,or the sum of the R-D cost of four44PUs.During the splitting of CUs from CTU, RDO technique is performed to select the optimum splitting which is with the smaller R-D cost.Therefore,the overall cost function should have a monotonic property.The problem of determining the optimum CTU splitting strategy can be implemented by judging whether CUs with each size should be split or not[12].The CU splitting(3) where denotes the optimal CU splitting of.III.P YRAMID M OTION D IVERGENCE B ASED CU S ELECTIONA.AnalysisTo accelerate the coding process,a number of approximations have been proposed,such as,SAD(sum of absolute differences)[13]and SSD between the current block and its corresponding prediction block [14].We will consider a CU in the frame predicted with a MVin the previous frame.The R-D cost denoted as can be approximated as the SSD which is given by(4) where denotes the value of pixel at position in frame ,and denotes the value of the reference pixel with the MV in frame.Without the consideration on illumination effects,a common assumption is that corresponding pixels should have the same value.It is expressed by(5) where is the corresponding MV of pixel at the position .Noted that the set of the corresponding MVs of pixels in the CU is denoted as.That is.Then(4)can be rewritten as(6) The corresponding image intensities are assumed to follow afirst-order stationary Markov process[15],which are known to be a simple and good model for a highly correlated signal,as(7) where is the correlation coefficient between the pixels and in frame denotes a zero-mean white-noise process.The statistical experiments in[16]show that although the correlation of adjacent pixels in High resolution videos is slightly higher than that of the low resolution videos,the frame also can be approximated by thefirst-order Markov process.Without regard to the noise,(7)can be rewritten as(8)To simplify the derivation,the correlation coefficients are assumed to obey a generalised correlation model[17],in which the correlation only depends on the distance of the pixels.can be described as,(9) where is the correlation coefficient such that and operation denotes as the distance between two pixels.The distance be-tween and can be calculated as.Then, (8)can be rewritten as(10) Equation(6)can be changed as(11)Performing afirst-order Taylor expansion on,we obtain(12) Equation(11)can be rewritten as(13) For each CU,the motion estimation is tofind the best matching refer-ence block with the optimal MV such that the R-D cost is the minimal. 
It is easy to be derived that the mean of is the optimal MV,that is(14)where is the mean of.In(13),we obtain,(15) Therefore,the minimal R-D cost is positively correlated with the vari-ance in the set of real MVs.B.ObservationThe statistical characteristics of MVs,such as the variance and mean, are widely used in video processing[18].Fig.1shows the statistical data which reflect the relationship between the R-D costs and the cor-responding variances of the real pixel MVs.The R-D costs are the costs of6464CUs.The real pixel MVs can be derived by the opticalflow estimations[19],[20].The outputs of these algorithms are the instan-taneous velocities of pixels.The real MVs of the statistical data are estimated with the successive over-relaxation(SOR)based method.1 To compress the ranges of the data,we show the data on logarithmic scale.It is evident that the R-D cost is monotonously increasing with the variance of pixel MVs.This observation can further validate the effectiveness of(15).In inter prediction,the large variances of the MVs are related to the strong and diverse motion.However,the small variances are corre-sponded with the weak and consistent motion.Fig.2shows the CU splitings and their corresponding PU modes of the2nd frame in se-quence BQMall.As shown in the Fig.2,when the motion is intense and dispersive,the block tends to be encoded as small CUs.However,when 1/celiu/OpticalFlow/Fig.1.The Relation between R-D costs and Variances of Pixel MVs (with sequences:(a)BQSquare,(b)BQMall).Fig.2.The CU splittings and their PU modes (with the BQMall sequence,the 2nd frame).Transparent red:skip mode,Transparent:inter mode.Transparent blue:intramode.Fig.3.The relationship between CU splittings and optical flow (with the BQ-Mall sequence,the 2nd frame).the motion is weak and homogeneous,the block tends to be encoded as large CUs.Fig.3shows the relationship between the CU splittings and the optical flow.The optical flow with different motion directions are shown with different colors.Different motion magnitudes are la-beled with different saturations.The regions of backgrounds tend to be coded as skip modes and large CUs.The boundary regions of moving bodies which include diverse motion tend to be coded as inter modes and small CUs.The observations show that the diversity of the motion affects the CU splittings.That is,the higher the degree of the motion divergence,the smaller CUs will be coded.C.Pyramid Motion Divergence FeaturesFrom(3),weobserve that a pair of R-D costscan lead to a CU splitting.TheR-D cost is the sum of the R-D costs of the four sub-CUs which is described in (2).We consider that the CUsplittings (16)wheredenotes a mapping operation.We de fine a feature called Pyramid Motion Divergence (PMD).As shown in Fig.4,the PMD feature of a CU is calculated as the variance of the pixel MVs in and the variance ofFig.4.PMD feature representation.Fig.5.PMD similar degrees between CUs and the corresponding probabilities to be encoded with the same CU splitting.pixel MVs in the four sub-CUs.Thus,PMD can be denoted as,where,and is the th sub-CU of ,i.e.,is the variance of ,in which is the mean MV of .From (15),we observe that the minimal R-D cost is positively corre-lated with the variance in the set of real MVs.Thus,it can be considered that the CUs with similar PMDs tend to be encoded with the same CU splittings.That is,(17)whereand are two PMD features of two CUs in the same size.denotes that is in the neighborhood of ,i.e.,andare similar.Fig.5shows the normalized PMD similar degrees of CU 
pairs and the corresponding probabilities of them to be encoded with the same CU splitting.The CUs are extracted from four sequences (BasketballPass,PartyScene,BQTerrace,and Traf fic)which are with difference resolutions.The CUs are paired with each other for each se-quence.For statistical convenience,similar degrees of the CU pairs are normalized into the range [0,1].The normalized PMD similar degree between and is de fined as(18)That is,means and are the most similar PMD to each other.The probabilities are calculated in 20intervals of .It is evident that CUs with the similar PMDs have the high probabilities to be encoded with the same splitting.D.Optical Flow EstimationIn order to determine the CU splittings with the PMD features,the MVs of pixels should be extracted before coding each frame.The op-ticalflow estimation is a typical motion estimation algorithm.MVs of all the pixels in the frame can be derived by the opticalflow estima-tion methods.Generally,the opticalflow calculation requires excessive time.The proposed algorithm focuses on reducing the computational complexity.Thus,allocating large computation cost on the opticalflow estimation is not necessary.In this paper,the opticalflow is estimated with the1/16downsampled frames to reduce the complexity.That is, the low resolution frames are downsampled by a factor of4in both the horizontal and vertical directions.The reason why the down sampling factors are4is that44is the minimum PU size in the HEVC.The pixels in the44blocks are limited to be predicted as a whole.Fur-thermore,the number of MVs with the downsampled version is less than the original resolution.For example,in the original versions,each pixel has a vector.The6464CUs include ve-locity vectors.However,after the downsampling,each44block has a vector.There are only vectors for the6464CU. The reduced number of the MVs results in the computation of PMDs fast.E.CU Splittings DeterminationBased on the analysis in Section III-C,we know that the CUs with the similar PMDs are generally encoded with the same splittings.After the estimation of the MVs,the PMD is easy to be calculated as shown in Fig.4.The similar degree of CUs is measured by the Euclidean distance of the PMDs.We will consider two CUs and.The similar degree between and denoted as can be calculated as(19) where and are the PMD feature vectors of and,respectively. With the calculated PMDs,the CU splittings can be determined by comparing with the PMDs of the previous coded CUs.This is a NN like method.In order to determine whether a CU is split or not,a searching of the similar CUs is performed on the previous coded CUs. The most similar CUs are found by the searching process.The split-ting types of the similar CUs may include:and.The traditional NN algorithms usually select the majority type directly. However,in this paper,the PMD is derived without consideration on the other PU modes except the mode.It is difficult to distinguish whether a CU should be predicted with the other PU modes except or split to four sub-CUs.It can be considered that there is interference if not all the NN are with the same splitting type.nevertheless,these PU modes are rarely used in inter prediction. 
Thus,to reduce the error rate,we do not select the majority type directly if not all the NN are with the same splitting type.The CU splittings are determined as(20) where and denote the lengths of and CUs of the most similar CUs respectively,such that.The case means that all the similar CUs are encoded in manner.If ,then the current CU is determined to be coded in manner.That is,it will be only predicted in the current depth and not be split recursively.Analogously,the case means that all the similar CUs are CUs.If,then the current CU is de-termined to be coded in manner.It will skip the current depth and recursively predict with four sub-CUs.Otherwise(i.e.,and ),the current CU is determined to the CUs.For these CUs,it will be predicted in boththe and manners.That is,there is no time saving for the CUs.Fig.6.The Flow Chart of the proposed algorithm.Since CU splittings are determined by the comparing with the pre-vious CUs,the number of compared previous CUs should be limited such that less time is required by the comparing process.Two First-in First-out(FIFO)queues with the same preset lengths are used for storing and previous CUs,respectively.Furthermore,to ensure that there are enough previous CUs for comparing,thefirst“P”frame are predicted in both the two manners.Theflowchart of the pro-posed algorithm is shown in Fig.6.IV.E XPERIMENTAL R ESULTSIn this section,the performance of the proposed fast CU size selection algorithm is evaluated in terms of the difference in compu-tational time,Bjontegaard Delta bit rate(BDBR)and BDPSNR(dB) [22].The performance gain or loss is measured with respect to the HEVC reference software platform(HM9.0).2The experiments are implemented based on the“encoder_lowdelay_P_main”(LP-Main) and“encoder_lowdelay_main”(LB-Main)settings as stipulated by the common condition which was proposed in[23].The detailed test conditions are as follows.1)Max and CU sizes are6464and88,respectively.2)Max and Min Transform Unit(TU)sizes are3232and44,respectively.3)RDO quantization(RDOQ)is enabled.4)Fast search is enabled,and search range is64.The configurationfiles were put in the“cfg”folder of the HM9.0soft-ware package.The quantization parameters(QP)values are set to22, 27,32,and37for a wide range of qualities and bit-rates.To simplify the test,all the“P”and“B”frames are encoded with the same QP offset 3.The simulation is performed on a computer with an Intel Xeon(R) 2.30GHz processor.The opticalflow are estimated with the method provided by Liu.3The parameters of the opticalflow estimation are set to the default values.In the NN like method,is set to5.The length of FIFO queues are set to300.The time savings are calculated as(21) where denotes the overall encoding time of the proposed fast algorithm and,denotes the total encoding time of the reference method.Table I shows the results base on the“LP-Main”setting.The pro-posed algorithm is compared with the state-of-the-art algorithms which are the Shen’s algorithm[6],and the Lee’s algorithm[21].Both the al-gorithms[6]and[21]are devised based on the observations and sta-tistical experiments.However,the proposed method is designed based on the PMD feature which is derived through the theoretical analysis. 
The proposed method achieves the average time saving about43%.The 2https://hevc.hhi.fraunhofer.de/svn/svn HEVCSoftware/tags/HM-9.0rc2/3/celiu/OpticalFlow/TABLE IR ESULTS OF THE P ROPOSED A LGORITHM C OMPARED W ITH THE S TATE -OF -THE -A RT F AST A LGORITHMS (LP-M AIN)TABLE IIT IME S AVINGS ,B IT -R ATE I NCREMENT AND PSNR L OSS C OMPARED W ITH O RIGINAL A LGORITHM (LB-M AIN)time saving goes up to 61%in the sequence Vidyo4.A slight BD-rateincrement and PSNR loss for luma are 1.9%and,respectively.It means that the proposed algorithm achieves almost the same R-D performance as the full-searching scheme employed in the HM refer-ence software.The time saving for the Lee’s algorithm is only 19%although the BD-rate increment and PSNR loss are minor.The time saving of the proposed algorithm is signi ficantly greater than that of the Lee’s algorithm.That is,the proposed algorithm can further reduce the redundancy complexity.For the Shen’s algorithm,the time saving,the BD-rate increment and BDPSNR for luma are 39%,1.59%and,respectively.The proposed method is slightly faster than theShen’s algorithm while the performance loss is almost the same.Table II shows the results of the proposed algorithm based on the “LB-Main”setting.The average time saving is approximately 40%,and the BD-rate increment for luma is only 2.21%.Therefore,the pro-posed fast CU decision method are effective for both the “LP-Main”and “LB-Main”settings.The fifth column of Table II shows the per-centages of the time consumed by the optical flow estimation (OFE time)as compared with the overall time of the proposed method.The average percentage of OFE time is9%.Fig.7.The R-D performances ofBasketballDrill.Fig.8.The Time Savings and BD-rate Increments with different .Fig.7shows a typical example of RD curves compared with the reference software.As shown in Fig.7,there is no obvious performance loss for the proposed algorithm no matter for the low rate or high rate.Fig.8shows the time savings and BD-rate increments with different .When ,it is the typical nearest neighboring algorithm.As the value of is decreasing,the time saving is increased.Meanwhile,the BD-rate increment is increased.To get a acceptable trade-off between the computational complexity and coding performance,we set in the test experiments.V .C ONCLUSIONIn this paper,a PMD-based fast CU size selection method is pro-posed.The implicit relation between the motion divergence and the R-D cost is investigated.A NN like method is presented to determine the CU splittings.The proposed system search the similar CUs to select the optimum CU splitting.The ef ficiency of the models is evaluated with respect to the time savings and the coding quality.The experi-mental results show that the proposed algorithm reduces the encoding time signi ficantly and also has a negligible coding loss.R EFERENCES[1]T.Wiegand,G.Sullivan,G.Bjontegaard,and A.Luthra,“Overviewof the H.264/A VC video coding standard,”IEEE Trans.Circuits Syst.Video Technol.,vol.13,no.7,pp.560–576,Jul.2003.[2]H.Li,G.Liu,Z.Zhang,and Y.Li,“Adaptive scene-detection algorithmfor VBR video stream,”IEEE Trans.Multimedia,vol.6,no.4,pp.624–633,Aug.2004.[3]H.Li,K.N.Ngan,and Q.Liu,“Faceseg:Automatic face segmentationfor real-time video,”IEEE Trans.Multimedia,vol.11,no.1,pp.77–88, Jan.2009.[4]W.-J.Han et al.,“Improved video compression efficiency throughflexible unit representation and corresponding extension of coding tools,”IEEE Trans.Circuits Syst.Video Technol.,vol.20,no.12,pp.1709–1720,Dec.2010.[5]J.Leng,L.Sun,T.Ikenaga,and 
S.Sakaida,“Content based hierarchicalfast coding unit decision algorithm for HEVC,”in Proc.Int.Conf.Mul-timedia and Signal Processing,May2011,vol.1,pp.56–59.[6]L.Shen,Z.Liu,X.Zhang,W.Zhao,and Z.Zhang,“An effective cusize decision method for HEVC encoders,”IEEE Trans.Multimedia, vol.15,no.2,pp.465–470,Feb.2013.[7]J.Kim,J.Yang,K.Won,and B.Jeon,“Early determination of modedecision for HEVC,”in Proc.Picture Coding Symp.(PCS),May2012, pp.449–452.[8]M.Cassa,M.Naccari,and F.Pereira,“Fast rate distortion optimiza-tion for the emerging HEVC standard,”in Proc.Picture Coding Symp.(PCS),May2012,pp.493–496.[9]X.Shen,L.Yu,and J.Chen,“Fast coding unit size selection for HEVCbased on bayesian decision rule,”in Proc.Picture Coding Symp.(PCS), May2012,pp.453–456.[10]B.Bross,W.-J.Han,O.Jens-Rainer,S.Gary J,and W.Thomas,“Highefficiency video coding(HEVC)text specification draft7,”in Proc.9th Meeting,no.JCTVC-I1003,Apr.2012.[11]I.-K.Kim,K.McCann,K.Sugimoto,B.Bross,and W.-J.Han,“Hm7:High efficiency video coding(HEVC)test model7encoder descrip-tion,”in Proc.9th Meeting,No.JCTVC-I1002,Geneva,Switzerland, Apr.2012.[12]J.Xiong,“Fast coding unit selection method for high efficiency videocoding intra prediction,”Opt.Eng.,vol.52,no.7,pp.071504–071504, 2013.[13]W.Lin,K.Panusopone,D.M.Baylon,M.-T.Sun,Z.Chen,and H.Li,“A fast sub-pixel motion estimation algorithm for H.264/A VC video coding,”IEEE Trans.Circuits Syst.Video Technol.,vol.21,no.2,pp.237–242,Feb.2011.[14]L.-C.Chang,C.-H.Kuo,and B.-D.Liu,“A two-stage rate controlmechanism for RDO-based H.264/AVC encoders,”IEEE Trans.Cir-cuits Syst.Video Technol.,vol.21,no.5,pp.660–673,May2011. [15]J.Han,A.Saxena,and K.Rose,“Towards jointly optimal spatialprediction and adaptive transform in video/image coding,”in Proc.IEEE Int.Conf.Acoust.Speech Signal Process.,Mar.2010,pp.726–729.[16]J.Dong,K.N.Ngan,C.-K.Fong,and W.-K.Cham,“2-D order-16integer transforms for HD video coding,”IEEE Trans.Circuits Syst.Video Technol.,vol.19,no.10,pp.1462–1474,Oct.2009.[17]W.Mauersberger,“Generalised correlation model for designing2-di-mensional image coders,”Electron.Lett.,vol.15,no.20,pp.664–665, Sep.1979.[18]G.Dane and T.Q.Nguyen,“Motion vector processing for frame rateup conversion,”in Proc.ICASSP,May2004,vol.3,pp.309–312. 
[19]C.Liu,W.T.Freeman,E.H.Adelson,and Y.Weiss,“Human-assistedmotion annotation,”in Proc.IEEE puter Vision and Pattern Recognition,Jun.2008,pp.1–8.[20]T.Brox,A.Bruhn,N.Papenberg,and J.Weickert,“High accuracyopticalflow estimation based on a theory for warping,”in Com-puter Vision-ECCV2004.New York,NY,USA:Springer,2004, pp.25–36.[21]H.S.Lee,K.Y.Kim,T.R.Kim,and G.H.Park,“Fast encoding algo-rithm based on depth of coding-unit for high efficiency video coding,”Opt.Eng.,vol.51,no.6,pp.1–11,Jun.2012.[22]G.Bjontegaard,“Calculation of average PSNR differences betweenRD curves,”in No.ITU-T SC16/Q6,VCEG-M33,Austin,TX,USA, Apr.2001.[23]F.Bossen,“Common HM test conditions and software reference con-figurations,”in Proc.11th Meeting,no.JCTVC-K1100,Oct.2012.On Designing Paired Comparison Experiments for Subjective Multimedia Quality AssessmentJong-Seok Lee,Member,IEEEAbstract—This paper investigates the issue of designing paired compar-ison-based subjective quality assessment experiments for reliable results.In particular,the convergence behavior of the quality scores estimated from paired comparison results is considered.Via an extensive computer simu-lation experiment,the estimation performance in terms of the root mean squared error,the rank order correlation coefficient,and the change of the estimated scores with respect to the number of subjects are mathemati-cally modeled.Furthermore,it is confirmed that the models coincide with the theoretical convergence behavior.Issues such as the effect of human errors and the underlying distribution of the true quality scores are also examined.Index Terms—Quality of experience(QoE),subjective quality assess-ment,paired comparison,Bradley-Terry model,mean opinion score (MOS).I.I NTRODUCTIONMultimedia quality assessment plays an important role in devel-oping and evaluating many multimedia applications.In particular, since it has been demonstrated that long used signal-oriented measures such as peak signal-to-noise ratio(PSNR)are not well correlated with the way that humans perceive quality[1],the issue of mea-suring quality of experience(QoE)in the users’viewpoint has been considered crucial for successful user-centered multimedia services. 
Subjective quality assessment,in which human observers are asked to provide QoE judgment of given multimedia contents,is an important step toward understanding and imitating humans’quality perception mechanisms.Various multimedia technologies can benefit from results of subjective quality assessment,e.g.,developing objective quality metrics,developing and evaluating multimedia processing algorithms, and determining content distribution strategies.The subjective test methodology is an important factor affecting the results and thus must be designed appropriately for the given goal of an experiment.Among the standardized methodologies[2],single stimulus or double stimulus methodologies have been popularly used, where one stimulus or a pair of stimuli are presented to a subject one by one and the subject rates the stimulus(or stimuli)in a pre-defined dis-crete or continuous rating scale.However,these methodologies some-time suffer from difficulty in obtaining reliable subjective data due to problems such as ambiguity in definition of scales,dissimilar inter-pretations of a scale among users,etc.,especially when quality dif-ference between stimuli is small,individual quality levels are difficult Manuscript received March10,2013;revised July04,2013and September 29,2013;accepted October08,2013.Date of publication November26,2013; date of current version January15,2014.This work was supported in part by the Ministry of Science,ICT and Future Planning(MSIP),Korea,under the IT Con-silience Creative Program(NIPA-2013-H0203-13-1002)supervised by the Na-tional IT Industry Promotion Agency,and in part by the Basic Science Research Program(2013R1A1A1007822)funded by the MSIP through the National Re-search Foundation of Korea.The associate editor coordinating the review of this manuscript and approving it for publication was Dr.Sheng-Wei(Kuan-Ta) Chen.The author is with the School of Integrated Technology,Yonsei University, 406-840Incheon,Korea(e-mail:jong-seok.lee@yonsei.ac.kr).Digital Object Identifier10.1109/TMM.2013.22925901520-9210©2013IEEE.Personal use is permitted,but republication/redistribution requires IEEE permission.See /publications_standards/publications/rights/index.html for more information.。
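上文《A Fast HEVC Inter CU Selection Method Based on Pyramid Motion Divergence》一文中,PMD特征由当前CU与其四个子CU在1/16下采样光流场中的运动矢量方差组成,再用近邻比较决定是否划分。下面给出PMD特征计算的简化示意(光流估计与近邻搜索部分从略,数据布局为假设):

#include <vector>
#include <array>

struct Flow { float u, v; };   // 下采样帧中每个位置的光流(像素运动矢量)

// 计算一块光流场内运动矢量的方差(u、v两个分量方差之和)
static float mvVariance(const std::vector<Flow>& flow, int stride,
                        int x0, int y0, int w, int h)
{
    double su = 0, sv = 0, suu = 0, svv = 0;
    int n = w * h;
    for (int y = y0; y < y0 + h; ++y)
        for (int x = x0; x < x0 + w; ++x)
        {
            const Flow& f = flow[y * stride + x];
            su += f.u;  sv += f.v;
            suu += double(f.u) * f.u;  svv += double(f.v) * f.v;
        }
    double mu = su / n, mv = sv / n;
    return float((suu / n - mu * mu) + (svv / n - mv * mv));
}

// PMD特征:当前CU的MV方差 + 四个子CU各自的MV方差,共5维
// (x0, y0, size)为该CU在下采样光流场中的位置与边长
std::array<float, 5> pmdFeature(const std::vector<Flow>& flow, int stride,
                                int x0, int y0, int size)
{
    std::array<float, 5> f;
    f[0] = mvVariance(flow, stride, x0, y0, size, size);
    int half = size / 2;
    for (int i = 0; i < 4; ++i)                    // 子CU按Z扫描顺序
        f[1 + i] = mvVariance(flow, stride,
                              x0 + (i & 1) * half, y0 + (i >> 1) * half,
                              half, half);
    return f;
}

按原文做法,随后以PMD间的欧氏距离在先前已编码CU中找出最相似的若干个CU(文中取5个),再依据它们的划分方式决定当前CU是否划分。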

高效率视频编码快速模式判决算法

高效率视频编码快速模式判决算法
李维,张和仙,杨付正
(西安电子科技大学综合业务网理论及关键技术国家重点实验室,710071,西安)
摘要:为了降低高效率视频编码(HEVC)的编码复杂度,提出一种新的快速模式判决算法。考虑到视频帧的纹理特性和编码中所采用的量化参数影响最优编码单元(CU)模式的选择,首先提取前……
……均只上升了0.69。
关键词:高效率视频编码;快速模式判决;编码单元;量化参数
中图分类号:TN914.42 文献标志码:A 文章编号:0253-987X(2013)08-0104-06
A Fast Mode Decision Algorithm for High Efficiency Video Coding
Abstract: A fast mode decision algorithm is proposed to reduce the complexity in high efficiency video coding (HEVC) encoder. The potential effects of frame texture and its related quantization……

3D-HEVC深度图帧内编码快速算法

3D-HEVC深度图帧内编码快速算法
韩雪;冯桂;曹海燕
【摘要】3D-HEVC is a newly emerging stereo video coding standard. It adopts the multi-view plus depth format, which requires encoding the texture map and the corresponding depth map at the same time, and it introduces a series of new coding tools for the depth map. To reduce the computational burden of depth map coding, we propose an efficient fast algorithm for depth map intra coding based on the quad-tree splitting structure and the intra prediction process. We apply Otsu's method to each depth level to get the maximum inter-class variance, and analyse the correlation between the maximum inter-class variance and the optimal partition result. We use Otsu's method to stop further splitting of smooth CUs and to simplify the intra prediction process. We further optimize the algorithm by using the rate distortion cost and the correlation between different layers of CUs. Experimental results show that the proposed algorithm saves 40.1% coding time on average with negligible loss of coding performance.
编码3D视频的3D-HEVC编码标准采用多视点加深度图的编码格式,新增的深度信息使编码复杂度剧增。本文针对编码块(Coding Unit, CU)的四叉树分割模型和帧内预测模式,提出了深度图帧内编码的快速算法。用Otsu's算子计算当前CU的最大类间方差值,判断当前CU是否平坦,对平坦CU终止四叉树分割和减少帧内模式的遍历数目。根据子CU与上一层CU的相似性,利用已编码的上一层CU对提前终止CU分割算法做优化。本算法与原始3D-HEVC算法相比减少40.1%的编码时间,而合成视点的质量几乎无变化。
【期刊名称】《信号处理》【年(卷),期】2018(034)006【总页数】8页(P680-687)
【关键词】3D-HEVC;深度图;Otsu's算子;模式决策;四叉树结构
【作者】韩雪;冯桂;曹海燕
【作者单位】华侨大学信息科学与工程学院,福建厦门361021;华侨大学信息科学与工程学院,福建厦门361021;华侨大学信息科学与工程学院,福建厦门361021
【正文语种】中文【中图分类】TN919.81
1 引言
3D视频的编码和传输多采用新兴的3D-HEVC编码标准,它基于传统的2D视频编码标准,即高效视频编码(High Efficiency Video Coding, HEVC)标准,采用多视点视频加深度图(Multi-view Video Plus Depth, MVD)的编码格式[1],能同时编码多个视点的纹理图及对应深度图,使用基于深度图的绘制(Depth-image-based rendering, DIBR)技术就能合成任意视点的视频信息[2]。
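上面摘要中提到用Otsu's算子计算当前CU的最大类间方差来判断其是否平坦。下面给出对一个深度图CU计算最大类间方差并做平坦判断的简化示意(平坦阈值为假设取值,非原文参数):

#include <vector>
#include <cstdint>

// 对一个深度图CU(8位样值)按Otsu方法求最大类间方差
double otsuMaxBetweenClassVariance(const std::vector<uint8_t>& block)
{
    int hist[256] = {0};
    for (uint8_t v : block) hist[v]++;
    const int total = static_cast<int>(block.size());

    double sumAll = 0;
    for (int i = 0; i < 256; ++i) sumAll += double(i) * hist[i];

    double best = 0.0, sumB = 0.0;
    int wB = 0;
    for (int t = 0; t < 256; ++t)           // 遍历所有可能的分割阈值
    {
        wB += hist[t];
        if (wB == 0) continue;
        int wF = total - wB;
        if (wF == 0) break;
        sumB += double(t) * hist[t];
        double mB = sumB / wB;               // 前一类(较小样值)的均值
        double mF = (sumAll - sumB) / wF;    // 后一类(较大样值)的均值
        double between = double(wB) * wF * (mB - mF) * (mB - mF);  // 未归一化的类间方差
        if (between > best) best = between;
    }
    return best / (double(total) * total);   // 归一化,便于与阈值比较
}

// 假设性的平坦判断:最大类间方差低于阈值则认为CU平坦,可终止四叉树划分并精简帧内模式
bool isSmoothCU(const std::vector<uint8_t>& block, double threshold /*示意取值*/)
{
    return otsuMaxBetweenClassVariance(block) < threshold;
}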

高效率视频编码帧内预测编码单元划分快速算法

高效率视频编码帧内预测编码单元划分快速算法齐美彬;陈秀丽;杨艳芳;蒋建国;金玉龙;张俊杰【摘要】高效率视频编码(HEVC)采用基于编码单元(CU)的四叉树块分区结构,能灵活地适应各种图像的纹理特征,显著提高编码效率,但编码复杂度大大增加,该文提出一种缩小深度范围且提前终止CU划分的快速CU划分算法。

首先,在学习帧中,基于Sobel边缘检测算子计算一帧中各深度边缘点阈值,缩小后面若干帧中CU遍历的深度范围;同时,统计该帧中各CU划分为各深度的率失真(RD)代价,计算各深度的RD代价阈值。

然后,在后续视频帧中,利用RD代价阈值在缩小的深度范围内提前终止CU划分。

为了符合视频内容的变化特性,统计参数是周期性更新的。

经测试,在平均比特率增加仅1.2%时,算法时间平均减少约59%,有效提高了编码效率。

%High Efficiency Video Coding (HEVC) adopts a quadtree-based Coding Unit (CU) block partitioning structure which is flexible to adapt to various texture characteristics of images and can improve the coding efficiency significantly;however, it also introduces a great computation complexity. This paper proposes a fast CU splitting algorithm which can narrow CU depth range and early terminate the CU splitting. Firstly, in learning frame, based on the Sobel edge detection operator, this study obtains edge point threshold at each CU depth level to narrow the CU traversal depth range in the next several frames. Meanwhile, this paper introduces the statistical of Rate-Distortion (RD) cost of CU in the frame and calculates RD threshold of each depth level. Then, in the subsequent video frames, RD threshold is utilized to terminate early the CU splitting in the narrow depth range. The statistical parameters are periodicallyupdated to cope with varying video content characteristics. The experimental results show that the proposed algorithm is able to save 59%encoding time on average with only 1.2%increasing on bit-rate which can observably improve the coding efficiency.【期刊名称】《电子与信息学报》【年(卷),期】2014(000)007【总页数】7页(P1699-1705)【关键词】高效率视频编码(HEVC);帧内预测;Sobel算子;率失真代价【作者】齐美彬;陈秀丽;杨艳芳;蒋建国;金玉龙;张俊杰【作者单位】合肥工业大学计算机与信息学院合肥 230009; 教育部安全关键工业测控技术工程研究中心合肥 230009;合肥工业大学计算机与信息学院合肥230009;合肥工业大学电子科学与应用物理学院合肥 230009;合肥工业大学计算机与信息学院合肥 230009; 教育部安全关键工业测控技术工程研究中心合肥230009;合肥工业大学计算机与信息学院合肥 230009;合肥工业大学计算机与信息学院合肥 230009【正文语种】中文【中图分类】TN919.8高效率视频编码(High Efficiency Video Coding, HEVC)是目前最新的视频编码标准[1],其编码效率大大优于以前的标准。
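上文在学习帧中用Sobel算子统计各深度的边缘点并据此缩小CU遍历的深度范围。下面给出用Sobel梯度统计CU内边缘点、再按边缘点比例限定遍历深度的简化示意(边缘阈值与比例到深度的映射均为假设,并非原文的阈值统计方法):

#include <vector>
#include <cstdint>
#include <cstdlib>

// 统计亮度块内Sobel梯度幅值超过edgeThreshold的像素个数(边缘点数)
int countSobelEdgePoints(const std::vector<uint8_t>& luma, int width, int height,
                         int edgeThreshold /*示意阈值*/)
{
    int count = 0;
    for (int y = 1; y + 1 < height; ++y)
    {
        for (int x = 1; x + 1 < width; ++x)
        {
            auto p = [&](int dx, int dy) { return int(luma[(y + dy) * width + x + dx]); };
            // 3x3 Sobel水平/垂直算子
            int gx = -p(-1,-1) - 2*p(-1,0) - p(-1,1) + p(1,-1) + 2*p(1,0) + p(1,1);
            int gy = -p(-1,-1) - 2*p(0,-1) - p(1,-1) + p(-1,1) + 2*p(0,1) + p(1,1);
            if (std::abs(gx) + std::abs(gy) > edgeThreshold)   // 用|gx|+|gy|近似梯度幅值
                ++count;
        }
    }
    return count;
}

// 假设性的深度范围选取:边缘点越多(纹理越复杂),允许遍历的最大深度越大
int maxDepthForCU(int edgePoints, int cuArea)
{
    double ratio = double(edgePoints) / cuArea;
    if (ratio < 0.02) return 1;     // 很平坦:只遍历到深度1
    if (ratio < 0.10) return 2;
    return 3;                       // 纹理复杂:允许遍历到最大深度
}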

利用深度学习的HEVC帧内编码单元快速划分算法

2021年2月第2期Vol. 42 No. 2 2021小型微 型计算 机系统Journal of Chinese Computer Systems利用深度学习的HEVC 帧内编码单元快速划分算法易清明,林成思,石敏(暨南大学信息科学技术学院,广州510632)E-mail :linchengsil23@ stu2017. jnu. edu. cn摘 要:新一代视频编码标准高效视频编码(High Efficiency Video Coding,HEVC)中编码单元(Coding Unit,CU )大小不同的特性使得编码效率得到显著提升,但同时带来了极高的计算复杂度.为了去除CU 划分中多余的计算从而降低编码复杂度,本 文提出了一种利用深度学习的编码单元快速划分算法.首先使用原始视频亮度块及编码信息建殳了一个HEVC 中CU 划分的 数据库,用于接下来本文深度学习神经网络的训练.然后,为了更好地砧合编码单元划分的层级结构,本文提出了一种基于In ­ception 模块的神经网络结构,使之内嵌于HEVC 编码框架中对编码单元的划分进行提前预测,有效地去除了 All Intra 配置下中冗余的CU 划分计算.实验结果表明,本文提出的算法与HEVC 官方测试模型(HM16. 12)相比,编码时间平均降低了61.31% ,而 BD-BR 与 BD-PSNR 仅为 1.86% 和O 13dB.关键词:HEVC ;编码单元划分;深度学习;Inception 模块中图分类号:TP391文献标识码:A 文章编号:1000-1220(2021 )02-0368-06Fast HEVC Coding Units Partitioning Algorithm Based on Deep LearningYI Qing-ming , LIN Cheng-si , SHI Min(College of Information Science and Technology ,Jinan University ,Guangzhou 510632,China)Abstract : The different size of the coding unit ( CU ) in the new generation of videocoding standard , highefficiency video coding (HEVC ) makes the coding efficiency significantly improved , but at the same time brings high computational complexity. In order toremove the redundant calculation in CU partition and reduce the coding complexity , this paper proposes a fast partitioning algorithmbased on deep learning. Firstly , a database of CU partition in HEVC is established by the original video luminance block and coding in ­formation ,which provides data guarantee for trainingthedeep learning model. Then , in order to better fit the hierarchical structure of coding unit partition , this paper uses a neural network structure based on Inception Module , which isembeddedin the HEVC coding framework to predict the partition of coding units in advance , effectively eliminating the redundant CU partition calculation in All Intraconfiguration. The experimental results show that compared with the HEVC official test model ( HM16. 12) ,the proposed algorithm re ­duces the coding time by an average of 61.31% .while the BD-BR and BD-PSNR are only 1.86% and -0.13dB.Key words : HEVC ; coding unit partition ; deep learning ; Inception module1引言H. 265/HEVC ( High Efficiency Video Coding ,高效视频编码)是ITU-T 和ISO/IE 在2013年联合发布的继H. 
264/ AVC 之后的新一代高效视频编码压缩标准⑴.HEVC 标准致 力于在保证视频编码质量的前提下,在编码效率上比AVC 提升一倍,为此HEVC 引入了一系列新的技术.但同时编码复杂度急剧增加,使得在多媒体应用上HEVC 存在对设备的运算能力要求的较高门槛,因此HEVC 亟需通过算法优化降低编码复杂度.在HEVC 中,为了适应视频图像平缓区域和复杂区域的不同编码特性,一幅图像可以被划分为互不重叠的编码树单 元[21(CTU),在CTU 内部采用基于四叉树的循环分层结构 划分为若干个CU,这个递归过程占据了 HEVC 编码时间的80%以上,其原因在于对于每一个CU 划分的可能方案 HEVC 采用了暴力递归的做法遍历每一种方案通过计算RD代价(RD-cost)以寻求最优码率,这个过程存在大量的冗余 计算⑶.自HEVC 标准发布以来,许多针对HEVC 编码单元快速 划分的优化方案被陆续提出,不同程度地减少了 CU 划分的 冗余计算.文献[4]所提出的算法定义了基于RD 代价(RD-cost) 的贝叶斯决策方法, 在每个 CU 深度级别执行提前CU分裂和修剪,减少CU 分割遍历次数.文献[5]利用前一帧以 及当前帧相邻CU 的划分结果,对当前帧的CU 划分方式进行预判.文献[6]利用了输入图像纹理梯度等特性,利用纹理复杂度自适应地跳过一些CU 大小.文献[7 ]提供自适应决策 能力来扩展基于关键点的CU 深度决策(KCD)方法,一定程度上降低了编码计算复杂度.以文献[8,9]为代表的基于支收稿日期;2020-02-28收修改稿日期:202045-12基金项目:国家自然科学基金项目(61603153)资助;广州市“羊城创新创业领军人才支 持计划”之创新领军人才项目(领军人才2019019)资助;广州市产业技术重大攻关计划项目(201802010028)资助.作者简介:易清明,女,1965 年生,博士,教授,研究方向为IC 设计、视频编码等;林成思,男,1995年生,硕士,研究方向为视频编码;石敏,女,1977年生,博士,副教授,研 究方向为图像处理、视频编解码等.2期易清明等:利用深度学习的HEVC帧内编码单元快速划分算法369持向量机(SVM)的快速CU大小决策算法提取图像特征来判断CU复杂度,然后基于SVM建立CU大小决策的分类器结构.文献[10]同样使用SVM来进行预测帧内CU划分,同时结合帧内角度预测模式,进一步减少帧内预测的计算复杂度.另外,随着深度学习的不断发展,很多深度学习方法被应用到视频编解码的领域.文献[11]使用了一个两层的卷积神经网络结构来预测单个CU是否划分的二分类问题.文献[12]使用了一个更深的3层CNN结构来预测整个CTU的模型,并建立了一个针对CU划分的数据库CPIH,改善了网络的学习能力.上述帧内CU划分快速选择算法都能够在一定程度上减少CU划分冗余计算,缩减编码时间,但仍存在自身缺陷.例如早期启发式方法如文献[4-7]依赖与人为指定的经验特征,无法挖掘CU划分过程中的深层次特征.文献[8-10]利用的SVM的方法学习能力有限,相比深度学习模型在预测能力上稍显不足,准确率不能得到保证.而深度学习方法如文献[11,12]等也存在着一些问题,如分开使用3个同样的CNN来分别预测3种大小的CU划分,在不同深度的CU划分中没有建立联系;同时每次网络只判断一层的CU划分情况,需要反复调用多次才能预测出整个CTU的划分,仍有改进的空间.本文针对以上问题,提出一种基于深度学习编码单元快速划分算法.首先与一些三级分类器方法需要反复调用分类器不同的是,本文方法一次性预测整个CTU的CU划分情况,将同一个CTU中不同深度的CU划分建模为一个整体;另外,为了更好地适应CU划分的层级结构,本文算法使用了一种基于Inception模块的神经网络结构,使之内嵌于HEVC编码框架中,直接根据亮度像素值对编码单元的划分进行提前预测,有效地去除了All Intra配置下中冗余的CU 划分计算.2基于深度学习的快速编码单元划分算法与上一代编码标准H.264相比,HEVC新引入了通过四叉树方式灵活划分的树形结构单元(CTU),-个CTU大小一般为64x64,可以划分为不同大小编码单元(CU).为了贴合不同视频图像的纹理特征,每个CU共设有4种可能划分尺寸:64x64、32x32、16X16和8x8.图1为CTU划分为图1HEVC CU划分结构示意图Fig.1HEVC CU Splitting StructureCU的示意图.本文深度学习模型以整个CTU的亮度块像素作为输入,各个尺寸的CU分割模式作为输出,经过训练学习以获得较为准确的CU划分预测效果.2.1划分模式建模根据HEVC中CTU的分割结构,经过划分之后一个CU 最多可能有4种大小,包括64x64,32x32,16x16和8x8,分别对应于CU划分的深度0、1、2和3.如图2所示,我们用Ze10,1,2,3)来表示不同深度,用y,来表示在深度/-1的CU是否要划分为深度为I的子CU这一划分决策,例如几代表64x64的CU是否要划分为32x32的子CU这一决策.对于每一个大小316X16的CU来说,都存在一个“是否要划分为更小的CU”的决策.为了更好地区分不同深度、同一深度不同位置的CU划分决策,本文分别用几、{%.,}:“、{".,显”。
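按上文的建模,一个CTU的划分可表示为分层的二值决策:深度0处1个、深度1处4个、深度2处16个,网络一次性预测出这些决策后,编码器即可据此跳过被剪掉分支的RD遍历。下面给出用这些预测标志还原CTU划分结果的简化示意(数据结构与字段名为笔者假设):

#include <array>
#include <cstdio>

// 网络对一个CTU输出的分层划分决策:
// split0       —— 64x64 是否划分为 4 个 32x32
// split1[i]    —— 第 i 个 32x32 是否划分为 4 个 16x16
// split2[i][j] —— 第 i 个 32x32 内第 j 个 16x16 是否划分为 4 个 8x8
struct CtuSplitPrediction
{
    bool split0;
    std::array<bool, 4> split1;
    std::array<std::array<bool, 4>, 4> split2;
};

// 按预测结果打印该CTU最终的CU划分(编码器可据此直接跳过被剪掉分支的RD计算)
void printPredictedPartition(const CtuSplitPrediction& p)
{
    if (!p.split0) { std::printf("64x64 CU\n"); return; }
    for (int i = 0; i < 4; ++i)
    {
        if (!p.split1[i]) { std::printf("32x32 CU (idx %d)\n", i); continue; }
        for (int j = 0; j < 4; ++j)
        {
            if (!p.split2[i][j])
                std::printf("16x16 CU (idx %d,%d)\n", i, j);
            else
                std::printf("4 x 8x8 CUs (idx %d,%d)\n", i, j);
        }
    }
}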

HEVC学习(十二)——CU的最终划分

HEVC学习(十二)——CU的最终划分
相信会有不少人对如何确定CU最终的划分有所困惑(包括我在内,刚开始接触时也不知道该怎么做),我觉得很大的一个原因就是CU是递归划分的,这就导致在寻找确定最佳分割位置时比较困难。

其实,解决问题的办法说难也不难,关键在于思路的转换,既然对于xCompressCU中是如何保存划分模式的觉得难以理解,何不跳出这个小圈子寻找新的方法呢?我们可以从解码器的角度来考虑,因为最终编码后的码流是要经过解码器解码的,解码器事先也是不知道CU到底最终是如何划分的。

因此,可以推断,编码器必然会保存下这个信息,至少是提示信息。

不妨参考encodeCU这个函数的实现,因为它是最终将信息编码成码流的函数。

该函数调用的是xEncodeCU来完成实际工作,截取它当中其中一段对我们有用的代码:

// We need to split, so don't try these modes.
if( !bSliceStart &&
    ( uiRPelX < pcSlice->getSPS()->getPicWidthInLumaSamples()  ) &&
    ( uiBPelY < pcSlice->getSPS()->getPicHeightInLumaSamples() ) )
{
  m_pcEntropyCoder->encodeSplitFlag( pcCU, uiAbsPartIdx, uiDepth );
}

不错,encodeSplitFlag就是用于编码CU分割信息的函数,它的实现如下:

// Split mode
Void TEncEntropy::encodeSplitFlag( TComDataCU* pcCU, UInt uiAbsPartIdx, UInt uiDepth, Bool bRD )
{
  if( bRD )
  {
    uiAbsPartIdx = 0;
  }
  if( !bRD )
  {
    if( pcCU->getLastCUSucIPCMFlag() && pcCU->getIPCMFlag(uiAbsPartIdx) )
    {
      return;
    }
  }
  m_pcEntropyCoderIf->codeSplitFlag( pcCU, uiAbsPartIdx, uiDepth );
}

对我们有用的函数是最后的codeSplitFlag函数,它的实现如下:

Void TEncSbac::codeSplitFlag ( TComDataCU* pcCU, UInt uiAbsPartIdx, UInt uiDepth )
{
  if( uiDepth == g_uiMaxCUDepth - g_uiAddCUDepth )
    return;

  UInt uiCtx           = pcCU->getCtxSplitFlag( uiAbsPartIdx, uiDepth );
  UInt uiCurrSplitFlag = ( pcCU->getDepth( uiAbsPartIdx ) > uiDepth ) ? 1 : 0;

  assert( uiCtx < 3 );
  m_pcBinIf->encodeBin( uiCurrSplitFlag, m_cCUSplitFlagSCModel.get( 0, 0, uiCtx ) );

  DTRACE_CABAC_VL( g_nSymbolCounter++ )
  DTRACE_CABAC_T( "\tSplitFlag\n" )
  return;
}

可以看到最为有用的一句:

UInt uiCurrSplitFlag = ( pcCU->getDepth( uiAbsPartIdx ) > uiDepth ) ? 1 : 0;

也就是说,通过判断pcCU->getDepth( uiAbsPartIdx )是否大于uiDepth来确定当前CU是否还要继续分割,后者我们知道,是当前CU的深度,那么前者呢?自然就是在xCompressCU中确定下来的当前CU的最佳分割模式。
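沿着这篇博客的思路,编码器把每个4×4子块的最终深度按Z扫描顺序保存,只要自顶向下比较"记录的深度是否大于当前遍历深度",就能还原最终的CU划分。下面给出一个不依赖HM数据结构的简化示意(深度数组按Z序存放,每个元素对应一个4×4块):

#include <vector>
#include <cstdio>

// depths: 按Z扫描顺序存放的每个4x4子块的最终划分深度(64x64 CTU共256个元素)
// absPartIdx: 当前CU的首个4x4子块在Z序中的下标;curDepth: 当前遍历深度
void recoverPartition(const std::vector<int>& depths, int absPartIdx, int curDepth,
                      int x, int y, int size)
{
    // 与codeSplitFlag中一致的判断:记录的深度大于当前深度,说明该CU继续划分
    if (depths[absPartIdx] > curDepth)
    {
        int half = size / 2;
        int numPartsInSubCU = (size / 8) * (size / 8);  // 每个子CU覆盖的4x4块数
        for (int i = 0; i < 4; ++i)                     // 子CU按Z扫描顺序
            recoverPartition(depths, absPartIdx + i * numPartsInSubCU, curDepth + 1,
                             x + (i & 1) * half, y + (i >> 1) * half, half);
    }
    else
    {
        std::printf("CU: (%d,%d) %dx%d, depth %d\n", x, y, size, size, curDepth);
    }
}

// 用法示意:recoverPartition(depths, 0, 0, 0, 0, 64);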

针对HEVC编码单元的二分深度划分算法

针对HEVC编码单元的二分深度划分算法曹腾飞;史媛媛【摘要】为了降低高效率视频编码(HEVC)的编码单元(CU)进行四叉树递归遍历的时间,提出一种改进的编码单元快速划分算法.首先,利用帧间时间域的相关性,提取前一帧相同位置CU的最优划分结构,以预测当前CU的划分深度;然后通过改进编码CU结构划分遍历的算法,减少CTU (Coding Tree Unit)四叉树结构的遍历,即从二分深度开始遍历,在每一步遍历之前,判断是否提前终止遍历.实验表明,与HM15.0中的基准划分算法相比,本文算法能够在保证编码性能的同时,降低了55.4%的编码时间,提高了HEVC的编码效率.【期刊名称】《计算机系统应用》【年(卷),期】2018(027)007【总页数】6页(P167-172)【关键词】高效率视频编码(HEVC);编码单元(CU)结构;四叉树;二分深度划分【作者】曹腾飞;史媛媛【作者单位】青海大学计算机技术与应用系,西宁810016;西安电子科技大学计算机学院,西安710071【正文语种】中文随着智能终端的普及使得视频应用越来越多样化,涉及的视频内容丰富多样, 人们对视频分辨率的要求也随之水涨船高[1], 对视频编码压缩和传输提出更高的要求.2013年JCT-VC (Joint Collaborative Team on Video Coding)正式发布了新一代高效视频编码标准(High Efficiency Video Coding, HEVC)标准[2], 又称H.265, 由于新标准HEVC采用了灵活的四叉树自适应存储结构、35种的帧内预测(包括planner 模式和 DC模式)模式、包括运动信息融合技术(Merge)以及基于Merge 的Skip模式的帧间预测模式、自适应环路滤波等新技术[3], 这些技术的改进使得HEVC比H.264节省了50%左右的编码率[4]. 这些新的编码技术在提高编码效率的同时, 也增加了计算复杂度, 特别是编码器为获得最佳CU四叉树划分所采用的全深度搜索方法需要大量的计算时间, 这极大地提高了HEVC编码器的复杂度.目前已经有一些研究学者针对CU四叉树划分计算复杂度高的问题提出众多优化算法来降低计算复杂度. 文献[5]提出了自适应 CU 深度遍历的算法, 利用空域相关性来预测当前编码块的深度值, 而大部分的编码块还是要遍历3个CU以上的深度, 所以该方法节省的时间相当有限. 文献[6]通过比较时空相邻 CU 的深度, 来判断是否可以跳过当前深度 CU 的预测编码.文献[7]采用时域空域相结合的预测方式, 通过相邻编码块深度值加权方式来预测当前编码块的深度值. 虽然这种方法在很大程度上减少了遍历范围, 但没有考虑到不同视频序列之间的差异性. 文献[8]根据空域相关性自适应地决定当前 CU 深度的最大值和最小值.显然, 这种仅根据空域的相关性的方法降低的复杂度是有限的, 且其预测深度值的准确度也不高. 文献[9–11]通过提前终止四叉树划分的遍历对CU结构进行快速决策, 依据当前待编码CU的率失真代价(RDcost)与阈值Thre间的大小, 判断是否要进行下一步的递归划分, 但当图像细节较多时提高的效率不高.文献[12]在帧内预测前利用相邻已编码CU的深度对当前CU的深度进行预判断; 在帧内预测时利用当前CU的率失真代价与预先定义的阈值进行对比判断, 跳过一些不合适所属区域内容的编码单元尺寸类型的编码过程. 文献[13]提出一种基于时空相关性的编码单元深度决策算法. 融合关联帧编码单元的深度信息及当前帧相邻编码单元的深度信息, 从而预测当前编码单元的深度范围.针对以上问题, 本文综合考虑了编码效率和时间损耗, 通过优化CU四叉树结构划分的遍历过程, 提出了一种CU结构快速划分算法. 通过考虑时空域的相关性和二分法的高效性对编码单元深度决策算进行改进法, 使得优化后的结果在基本不改变图像的质量与输出码率的情况下, 提高了编码时间.1 HEVC的CU深度决策为了灵活有效的表示视频编码内容, HEVC为图像的划分定义了一套全新的分割模式, 包括编码单元(Coding Unit, CU)、预测单元 (Prediction Unit, PU)和变换单元(Transform Unit, TU)3个概念描述整个编码过程.其中CU类似于 H.264 中的宏块或子宏块, 每个CU均为2N×2N的像素块, 是HEVC编码的基本单元, 目前可变范围为64×64至8×8. 图像首先以最大编码单元(LCU, 如64×64块)为单位进行编码, 在LCU内部按照四叉树结构进行子块划分, 直至成为最小编码单元(SCU, 如8×8块)为止, 对应的分割深度分别为 0、1、2 和 3, 如图1 所示. 在 HEVC 的编码过程中, 通过比较四叉树结构(图2)中本层CU 与下层 4个子CU的 RDcost 大小来决定 CU 是否需要划分, 进而决定 LCU 的最终划分方式, 如图2所示每一个CTU 的四叉树递归划分过程如下.步骤1. 首先进行CU的划分过程. 对于每个64×64深度Depth=0的最大编码单元LCU进行预测编码, 同时, 该CU还要进行各种PU预测和相应的模式选择, 最终根据公式(1)得到率失真代价.其中, SSE表示使用预测模式计算的残差平方和, λ表示拉格朗日系数; bits表示使用当前预测模式下进行编码的码率.步骤2. 将64×64的编码单元划分为4个32×32的子CU, 每个子 CU的尺寸大小是32×32, 深度Depth=1. 同理对每个子单元进行预测编码, 并计算各自的率失真代价值. 如此递归的划分下去, 直至划分到最小的编码单元8×8, 深度Depth=3时便不再划分.图1 四叉树结构步骤3. 从Depth为0的CU开始进行CU深度划分过程, 如果4个32×32大小的子CU的率失真代价之和大于于其对应的64×64大小的CU的率失真代价,则选择64×64 的 CU; 否则, 选择32×32 的 CU. 如此递归下去, 直到选到Depth为0的CU. 至此, 当前LCU的深度决策过程完成.图2 CU的深度分割示意图为了能够获取最佳的块划分方式, HEVC编码器使用全深度搜索方法. 在确定一个LCU的最终深度算法中, 需要对CU深度进行0~3次的全遍历,总共需要进行1+4+4×4+4×4×4=85次CU尺寸选择的率失真代价计算, 而每个CU还要进行各种PU预测和模式选择的率失真代价计算. 这使得编码器的复杂度过高, 无法满足编码器的实时应用, 因此降低编码器的复杂度, 提高编码速度具有非常重要的应用价值.2 算法介绍2.1 帧间时空域的相关性由于在视频序列是由连续的图像序列帧构成, 则相邻帧之间具有很强的关联性, 即空间相关性的存在.由于存在空间相关性, 因此HEVC编码帧与编码帧之间CU的分割方式也存在极强的关联性. 为了分析时空相邻CU划分深度的相关性, 首先通过5类官方的测试序列ClassA~E进行编码测试, 测试详情如表1所示. 
时空相邻CU 划分深度的相关统计结果如表2所示[14].表1 通用测试序列的详细情况序列类型Traffic, PeopleOnStreet, Nebuta, SteamLocomotive Class A Kimono,ParkScene,Cactus,QTerrace,BasketballDerive Class B RaceHorse, BQMall, PartyScene,BasketballDrill Class C RaceHorse, BQSquare, BlowingBubbles, BasketballPass Class D Johnny, KristenAndSara, FourPeople Class E通过表2的数据分析可知, 当前CU的最优化分深度与前一帧相同位置的CU之间有较强的相关性,因此可以利用前一帧相同位置的CU的划分深度对当前LCU的最优四叉树结构进行预测.表2 时空相邻CU的相关性(单位: %)量化参数QP 左帧CU 右帧CU 前帧CU 22 51.3 50.1 80.5 27 45.4 39.8 75.6 32 61.2 48.5 81.2 37 49.8 37.1 79.2 42 50.6 50.7 78.92.2 二分深度划分决策算法在HEVC视频编码标准中, 当视频图像比较平坦,内容变化缓慢时, 针对它的编码单元大多选择了大尺寸类型; 当视频图像比较复杂, 细节比较丰富时, 针对它的编码单元大多选择了小尺寸类型; 使用HEVC测试模型HM15.0下对5类官方的测试序列ClassA~E进行编码测试, 通过统计分析出不同量化参数QP下编码单元CU的分布情况, 见表3.表3 不同QP下编码单元CU的分布(单位: %)量化参数QP CU size 64×6432×32 16×16 8×8 22 9.51 24.85 39.76 25.88 27 10.34 32.37 41.53 15.76 32 12.6 17.21 45.44 24.75 37 15.04 31.3 36.1 17.56 42 16.26 22.09 32.04 29.61 从表3可以看出, 在测试序列中最小划分深度CU结构(64×64)仅仅占10%左右. 而在HM15.0中从划分深度最小的CU结构开始遍历, 那么大部分CU需要花费大量的时间效率进行层层遍历, 计算复杂度高.因此通过对CU四叉树结构的遍历过程进行优化, 提出了一种二分深度划分决策算法, 从二分深度Depth=2(即CU的大小为16×16)开始遍历, 并在每一步遍历之前, 判断是否提前终止遍历.2.3 二分深度划分算法整体算法描述如下:1) 利用前一帧相同位置的CU的划分深度对当前LCU的最优四叉树结构进行预测.2) 首先将二分深度CU(即CU的大小为16×16,Depth=2)的最佳预测模式作为最优RDcost模式, 以二分划分深度的CU作为最小CU结构.3) 将预测深度Depthpre和二分深度Depth进行比较(流程如图3所示). 若预测深度Depthpre ≥二分深度Depth, 则进行向下划分遍历, 跳转到4); 反之, 则进行向上合并遍历, 跳转到5).图3 二分深度划分算法流程4) 若预测深度Depthpre >=二分深度Depth, 则进行向下划分遍历(流程如图4所示). 首先以二分深度CU的最佳预测模式作为最优RDcost模式, 再对二分深度CU结构进行四叉树划分, 求解每一个子CU的最优RDcost, 并将最优RDcost与4个子CU的最小RDcost之和SRDcost进行比较: 若SRDcost较小, 则以4个子CU的最小RDcost之和作为最优RDcost, 以4个子CU作为最小CU结构; 若最优RDcost较小, 则停止当前CU子块的划分, 以当前CU作为当前CU子块的最优结构.5)若预测深度Depthpre <二分深度Depth, 则进行向上合并遍历(流程如图5所示). 对4个CU结构进行合并, 求解合并后CU的SRDcost, 并将其与最优RDcost图4 向下划分遍历算法流程进行比较: 若合并后CU的SRDcost较小, 则以合并后CU为最小CU结构, 并继续对周围相邻的4个CU进行合并; 若最优RDcost较小, 则停止当前CU子块的划分, 以当前CU作为最优结构.图5 向上合并遍历算法流程3 实验结果与分析为了验证本文所提出算法的效率, 以HEVC的软件测试模型HM15.0为参照进行实验. 实验平台为Inter(R)酷睿双核CPU, 主频2.60 GHz, 内存4.00 GB,操作系统Windows7. 在此实验平台上采用11个通用序列分别进行验证, 测试QP(量化参数)为: 22, 27, 32,37和42, 每个序列各测试100帧. 本文主要从所提出算法的编码效率以及付出的相应代价来考虑算法的性能.通过比较编码效率的参数指标有峰值信噪比增量(ΔPSNR)、码率增量(ΔBitrate)和编码时间增量(ΔTime):ΔPSNR是指本文所提快速算法与HM15.0算法的视频峰值信噪比之差, 即:ΔBitrate是指本文所提快速算法视频的平均码率与HM15.0算法视频的平均码率之差, 即:ΔTime是指本文所提快速算法视频的平均编码时间与HM15.0算法视频的平均编码时间之差, 即:从表4可以看出, 本文所提出的快速算法与HM15.0相比, 编码时间平均缩短了55.4%, 而编码码率平均仅增加了0.56%, 视频峰值信噪比仅降低了0.019%, 主观图像质量基本没有变化. 由此可见,HEVC编码CU快速划分算法在保证视频质量的前提下, 缩短了编码时间, 提高了编码效率.表4 编码性能的对比类别序列名称ΔPSNR(dB) ΔBitrate(%) ΔTime(%)Class A Traffic –0.015 0.74 –56.56 PeopleOnStreet –0.009 0.61 –53.62 Class B BasketballDerive –0.021 0.50 –49.19 ParkScene –0.034 0.46 –64.69 Class C RaceHorse –0.018 0.51 –62.47 PartyScene –0.016 0.57 –57.84 Class D RaceHorse –0.027 0.64 –56.64 BasketballPass –0.014 0.49 –48.57 Class E FourPeople –0.019 0.60 –50.77 KristenAndSara –0.017 0.48 –53.75平均–0.019 0.56 –55.41图6显示了本文提出的快速算法与HM15.0编码算的性能对比情况. 文中算法与HM15.0的率失真代价(RD)曲线基本重合, 表明本文算法与HM15.0的编码性能相比没有明显差异, 由此证明本文所提算法的有效性.表5是本文快速算法以其他几种典型算法的编码性能对比. 根据表5 的对比, 不论是编码质量、码率,还是编码时间, 本文提出的CU快速划分算法性能要比其他2种更优, 这得益于本文提出的二分深度划分技术, 降低了当前CU划分的时间消耗. 图6 RaceHorse的率失真代价(RD)曲线图表5 本文快速算法与各典型算法的性能比较各快速算法ΔPSNR(dB) ΔBitrate(%) ΔTime(%)文献[11] 0.020 0.62 –21.8文献[5] –0.029 1.41 –38.04本文算法–0.019 0.56 –55.414 结论为降低算法遍历 HEVC编码单元CU结构的计算复杂度, 本文提出一种改进的HEVC编码单元CU快速划分算法. 本文算法首先预测当前CU的最优深度,再通过二分深度划分算法进行深度遍历, 选出最优深度. 实验结果表明, 与HM15.0相比, 在编码质量几乎没有变化的情况下, 编码时间减少55.4%,能有效降低HEVC的编码复杂性.参考文献【相关文献】1Shen LQ, Zhang ZY, An P. Fast CU size decision and mode decision algorithm for HEVC intra coding. IEEE Transactions on Consumer Electronics, 2013, 59(1):207–213. 
[doi:10.1109/TCE.2013.6490261]2Sullivan GJ, Ohm JR, Han WJ, et al. Overview of the High Efficiency Video Coding (HEVC) standard. IEEE Transactions on Circuits and Systems for Video Technology, 2012,22(12):1649–1668. [doi: 10.1109/TCSVT.2012.2221191]3朱秀昌, 李欣, 陈杰. 新一代视频编码标准——HEVC. 南京邮电大学学报(自然科学版), 2013,33(3): 1–11.4黄胜, 付雄, 郝言明, 等. 一种HEVC帧内快速编码算法.光电子·激光, 2017, 28(3): 296–303.5Li X, An JC, Guo X, et al. Adaptive CU depth range.Proceedings of the 5th JCT-VC Meeting. Geneva,Switzerland. 2011.6李维, 张和仙, 杨付正. 高效率视频编码快速模式判决算法. 西安交通大学学报, 2013, 47(8): 104–109. [doi:10.7652/xjtuxb201308018]7Cao XR, Lai CC, Wang YF, et al. Short distance intra coding scheme for HEVC. Proceedings of 2012 Picture Coding Symposium. Krakow, Poland. 2012. 501–504.8Mu FS, Song L, Yang XK, et al. Fast coding unit depth decision for HEVC. Proceedings of 2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW).Chengdu, China. 2014. 1–6.9Kim J, Choe Y, Kim YG. Fast coding unit size decision algorithm for intra coding in HEVC. Proceedings of 2013 IEEE International Conference on Consumer Electronics(ICCE). Las Vegas, NV, USA. 2013. 637–638.10齐美彬, 陈秀丽, 杨艳芳, 等. 高效率视频编码帧内预测编码单元划分快速算法. 电子与信息学报, 2014, 36(7):1699–1705.11白彩霞, 袁春. HEVC子划分快速决策算法. 第八届和谐人机环境联合学术会议. 广州, 中国. 2012. 23–28.12姚晓敏, 王万良, 岑跃峰, 等. 一种面向HEVC的编码单元深度决策算法. 计算机工程, 2015, 41(1): 240–244.13蒋刚毅, 杨小祥, 彭宗举, 等. 高效视频编码的快速编码单元深度遍历选择和早期编码单元裁剪. 光学精密工程,2014, 22(5): 1322–1330.14李强, 覃杨微. 一种基于空时域相关性的HEVC帧间预测模式快速决策算法. 重庆邮电大学学报(自然科学版),2016, 28(1): 9–16. [doi: 10.3979/j.issn.1673-825X.2016.01.002]。
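结合上文 2.2、2.3 节的描述,下面给出一个简化的二分深度划分决策示意代码(并非论文源码,且只示意一层判断,论文中会继续递归地向下划分或向上合并):假设各尺寸 CU 的率失真代价已由编码器给出,predDepth 为前一帧同位置 CU 的预测深度。其中 DemoCosts、decideFromBinaryDepth 及所有代价数值均为示意性假设。

#include <cstdio>

// 示意数据:cost16 为某 16x16 CU 的率失真代价,cost8Sum 为其 4 个 8x8 子 CU 的代价之和,
// cost32 为包含它的 32x32 CU 整体编码的代价,cost16SumIn32 为该 32x32 区域内 4 个 16x16 CU 的代价之和。
// 实际编码中这些值由 RDO 过程计算得到,此处为假设值。
struct DemoCosts
{
  double cost16;
  double cost8Sum;
  double cost32;
  double cost16SumIn32;
};

// 示意:从二分深度 Depth=2(16x16)出发的划分决策,返回选定的划分深度
// (1 表示向上合并为 32x32,2 表示保持 16x16,3 表示继续划分为 8x8)
static int decideFromBinaryDepth( int predDepth, const DemoCosts& c )
{
  const int kBinaryDepth = 2;
  if( predDepth >= kBinaryDepth )
  {
    // 向下划分遍历:4 个子 CU 的代价之和更小则继续划分,否则提前终止
    return ( c.cost8Sum < c.cost16 ) ? 3 : 2;
  }
  // 向上合并遍历:合并后的 32x32 CU 代价更小则向上合并,否则停止
  return ( c.cost32 < c.cost16SumIn32 ) ? 1 : 2;
}

int main()
{
  DemoCosts flat   = { 80.0, 95.0, 260.0, 320.0 };  // 平坦区域:合并更划算
  DemoCosts detail = { 80.0, 70.0, 400.0, 320.0 };  // 细节区域:继续划分更划算

  std::printf( "predDepth=1, flat   -> depth %d\n", decideFromBinaryDepth( 1, flat ) );
  std::printf( "predDepth=3, detail -> depth %d\n", decideFromBinaryDepth( 3, detail ) );
  return 0;
}

从二分深度出发的好处在于:对照表 3 的统计,多数 CU 的最终尺寸集中在 16×16 附近,从中间深度开始判断可以避免从 64×64 开始的逐层全遍历。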

3D-HEVC深度视频快速帧内编码算法

3D-HEVC深度视频快速帧内编码算法韩慧敏;彭宗举;蒋刚毅;陈芬【期刊名称】《光电工程》【年(卷),期】2015(42)8【摘要】多视点彩色加深度视频(MVD)是目前3D场景的主流表示方式之一。

在3D-HEVC的编码框架中,深度视频帧内编码具有较高的编码复杂度。

本文提出了一种基于区域分割的3D-HEVC 深度视频快速帧内编码算法。

首先根据对应彩色视频的纹理特征和深度视频的边界提取结果,把深度图分为四个区域;然后统计分析了各区域中深度视频编码失真对绘制中间视点质量的影响以及帧内编码预测模式的分布规律;最后设计了在不同区域里采用不同的CU尺寸提前决定、模式粗选限定和快速DMMs模式决定的算法。

实验结果表明,本文提出的算法在绘制的中间视点质量不变的情况下平均节省55.11%的总体编码时间,对于深度视频的编码时间平均节省61.57%。

%Multi-view video plus depth (MVD) is one of the main representations of 3D scene. Under the framework 3D-HEVC, the depth video intra coding has high computational complexity. A region segmentation-based fast CU size decision and mode decision algorithm for 3D-HEVC depth video intra coding is proposed. Firstly, the depth map is divided into four regions based on the edge extraction of depth video and texture detection of the corresponding color video texture. Then, the effects of depth video distortion on rendered view quality and mode distribution of intra prediction coding are statistically analyzed in each region. Finally, different CU size, rough prediction mode and DMMsdecision are designed for encoding different regions. Experimental results show that the proposed algorithm save the encoding time of MVD by 55.1% on average, and the encoding time of depth video by 61.57% on average with negligible rendered virtual view image degradation.【总页数】7页(P47-53)【作者】韩慧敏;彭宗举;蒋刚毅;陈芬【作者单位】宁波大学信息科学与工程学院,浙江宁波 315211;宁波大学信息科学与工程学院,浙江宁波 315211;宁波大学信息科学与工程学院,浙江宁波315211;宁波大学信息科学与工程学院,浙江宁波 315211【正文语种】中文【中图分类】TN919.8【相关文献】1.3D-HEVC深度图像帧内编码单元划分快速算法 [J], 张洪彬;伏长虹;苏卫民;陈锐霖;萧允治2.基于多类支持向量机的3D-HEVC深度视频帧内编码快速算法 [J], 刘晟;彭宗举;陈嘉丽;陈芬;郁梅;蒋刚毅3.3D-HEVC深度图帧内编码快速算法 [J], 韩雪;冯桂;曹海燕4.采用灰度共生矩阵进行深度预判的3D-HEVC深度图帧内快速编码算法 [J], 廖洁;陈婧;曾焕强;蔡灿辉5.基于运动一致性的3D-HEVC深度视频帧间编码快速算法 [J], 刘晟;陈芬;金德富因版权原因,仅展示原文概要,查看原文内容请购买。
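上述摘要中"在不同区域采用不同的 CU 尺寸提前决定、模式粗选限定和快速 DMMs 决定"的思路,可以用如下示意代码表达(并非论文实现,区域类别、深度上限、候选模式数等具体参数均为举例假设):

#include <cstdio>

// 示意:深度图被分为四类区域后,不同区域允许的最大划分深度与候选模式数量不同。
// 区域编号、深度上限和模式数均为假设,仅用于说明"按区域限定搜索范围"的做法。
enum Region { kFlat = 0, kColorTexture = 1, kDepthEdge = 2, kComplex = 3 };

struct RegionPolicy
{
  int  maxDepth;       // 允许的最大 CU 划分深度(用于提前终止划分)
  int  numRoughModes;  // 模式粗选阶段保留的候选模式数
  bool tryDMMs;        // 是否尝试深度建模模式 DMMs
};

static RegionPolicy policyFor( Region r )
{
  switch( r )
  {
  case kFlat:         return { 1, 3, false };  // 平坦区域:大块、少量候选模式、跳过 DMMs
  case kColorTexture: return { 2, 5, false };
  case kDepthEdge:    return { 3, 8, true  };  // 深度边界区域:允许小块并尝试 DMMs
  default:            return { 3, 8, true  };  // 复杂区域
  }
}

int main()
{
  for( int r = 0; r <= 3; ++r )
  {
    RegionPolicy p = policyFor( static_cast<Region>( r ) );
    std::printf( "region %d: maxDepth=%d, roughModes=%d, DMMs=%s\n",
                 r, p.maxDepth, p.numRoughModes, p.tryDMMs ? "yes" : "no" );
  }
  return 0;
}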

基于深度预测的HEVC编码单元快速划分算法

基于深度预测的HEVC编码单元快速划分算法赵宏;蒋雨晨;李靖波【期刊名称】《计算机应用与软件》【年(卷),期】2017(034)005【摘要】A rate-distortion optimization method with high computational complexity is used for CU (coding unit)mode decision in HEVC (High Efficiency Video Coding), which has high time complexity and needs a long encoding time.In order to decrease the coding complexity of HVEC and accelerate the coding speed, a CU fast partition algorithm based on depth prediction is proposed.First, according to the depth correlation between the current CU and the surrounding CU and the current position CU in the reference frame, the optimal depth of the current CU is predicted.Then, the adjacent splitting method or partition decisions based on the relationship between the current depth and the predict depth of the current CU are used to split recursively the current CU.In order to reduce the misjudgment caused by the prediction, in the CU depth level by the two to three levels of division, we use rate distortion or percentage of the way to determine again.Experimental results show that compared with the original HEVC coding method, the algorithm reduced the coding time by 37.7% when the peak signal-to-noise ratio is reduced by 0.07 dB and the coding bit rate is increased by 0.88%, which has high time efficiency.%高效视频编码HEVC(High Efficiency Video Coding)采用计算复杂度较高的率失真优化方法对编码单元CU(Coding Unit)划分进行判决,具有较高的时间复杂度,编码所需时间较长.为降低HEVC编码复杂度,加快编码速度,提出一种基于深度预测的CU快速划分算法.首先依据当前CU与周围相邻CU和参考帧中当前位置CU的深度相关性,预测当前CU的最优深度,然后使用相邻相关分割法或依据当前CU深度和预测深度的关系对当前CU进行递归划分.为减少预测带来的误判,在CU深度级别由2级到3级的划分中,使用率失真或百分比的方式进行二次判定.实验结果表明,该算法与原始的HEVC编码方法相比,在亮度峰值信噪比减小0.07 dB,编码比特率增加0.88%的情况下,整体编码单元划分时间缩短37.7%,具有较高的时间效率.【总页数】5页(P229-233)【作者】赵宏;蒋雨晨;李靖波【作者单位】兰州理工大学计算机与通信学院甘肃兰州 730050;兰州理工大学计算机与通信学院甘肃兰州 730050;兰州理工大学计算机与通信学院甘肃兰州730050【正文语种】中文【中图分类】TP3【相关文献】1.纹理图的3D-HEVC深度图编码单元快速划分算法 [J], 谢红;魏丽莎;解武2.快速HEVC帧间预测编码单元深度选择算法 [J], 邵庆龙;杨智尧;郭树旭3.HEVC帧间预测编码单元深度快速选择算法 [J], 张盛峰;汪仟;黄胜;肖傲4.基于时空相关与纹理特性的HEVC编码单元快速划分算法 [J], 汤进;彭勇5.基于梯度结构相似度的HEVC帧内编码单元快速划分算法 [J], 敬文慧;何小海;卿粼波;李向群因版权原因,仅展示原文概要,查看原文内容请购买。
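上述摘要所述"利用当前 CU 与周围相邻 CU 以及参考帧同位置 CU 的深度相关性预测当前 CU 最优深度"的思路,可以用下面的示意代码表达(并非论文的具体公式,用相邻深度的最小值和最大值限定搜索范围只是一种常见的简化做法,predictDepthRange 为假设的函数名):

#include <algorithm>
#include <cstdio>

// 示意:由左、上、左上相邻 CU 以及参考帧同位置 CU 的最终深度,预测当前 CU 的候选深度范围
struct DepthRange { int minDepth; int maxDepth; };

static DepthRange predictDepthRange( int leftDepth, int aboveDepth,
                                     int aboveLeftDepth, int colocatedDepth )
{
  int lo = std::min( std::min( leftDepth, aboveDepth ),
                     std::min( aboveLeftDepth, colocatedDepth ) );
  int hi = std::max( std::max( leftDepth, aboveDepth ),
                     std::max( aboveLeftDepth, colocatedDepth ) );
  // 只在 [lo, hi] 范围内做递归划分与率失真比较,即可跳过明显不会被选中的深度
  return { lo, hi };
}

int main()
{
  DepthRange r = predictDepthRange( 2, 2, 1, 2 );
  std::printf( "search depth range: [%d, %d]\n", r.minDepth, r.maxDepth );
  return 0;
}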


第 33 卷 第 11 期
2017 年 11 月

科 技 通 报
BULLETIN OF SCIENCE AND TECHNOLOGY

Vol.33 No.11 Nov. 2017

一种提前终止 CU 划分和模式选择的 HEVC 快速算法

司丽娜 1, 江 楠 1*
(1. 郑州轻工业学院 计算机与通信工程学院, 郑州 450002; 2. 郑州轻工业学院 软件学院, 郑州 450002)

摘 要: 与 H.264/AVC 相比, 新一代高效视频编码标准 (high efficiency video coding, HEVC) 有效地改善了压缩性能, 但同时也显著增加了计算复杂度。为减少编码复杂度, 需要对 HEVC 的快速算法进行研究。本文提出一种提前终止编码单元 (coding unit, CU) 划分和模式选择的快速算法, 以降低 HEVC 帧内编码的计算复杂度。实验结果表明, 本文所提出的快速算法在编码性能几乎不变的前提下, 平均可节省约 36%的编码时间。
关键词: HEVC; 模式选择; 帧内预测
中图分类号: TP301.6    文献标识码: A    文章编号: 1001-7119 (2017) 11-0150-05
DOI:10.13774/ki.kjtb.2017.11.035

HEVC Fast Algorithm with Early Termination CU Split and Mode Decision
Si Lina1 , Jiang Nan1* , Zhang Zhifeng2
(1. School of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou 450002, China; 2. Software Engineering College, Zhengzhou University of Light Industry, Zhengzhou 450002, China)
Abstract: Compared with H.264/AVC, HEVC (High Efficiency Video Coding) as the new generation video coding standard can effectively improve the compression performance at the expense of increasing the computational complexity. In order to reduce the coding complexity, it is very necessary to research the fast algorithm of HEVC. So this paper proposes a fast algorithm with early termination of CU (Coding Unit) split and mode selection decision to reduce the calculation complexity of HEVC intra coding. The experimental results show that the proposed algorithm can averagely save about 36% of the encoding time on the premise of keeping the coding performance almost the same.
Keywords: HEVC ; mode decision ; intra prediction

收稿日期: 2016-12-02
基金项目: 国家自然科学基金 (61302118) ; 2016 年度河南省高等学校重点科研项目 (16A520030) 。
作者简介: 司丽娜 (1982-) , 女, 河南省新密市人, 研究生, 讲师, 研究方向: 多媒体通信。
*通讯作者: 江楠 (1981-) , 男, 河南郑州人, 研究生, 讲师, 研究方向: 计算机应用, 数据库。

近几年来, 在许多消费领域, 数字视频尤其是数字高清视频已经成为多媒体行业的主导格式。视频编码技术的出现, 大大促进了数字视频的传输。为了提高高清视频的编码效率, 2010 年 1 月, ITU-T 和 ISO/IEC 联合成立了 JCT-VC (Joint Collaborative Team on Video Coding) 联合组织, 一起合作制定出了下一代视频编码标准 HEVC[1-3]。新一代高效视频编码标准 HEVC 通过采用四叉树编码结构和大块预测及变换等多种先进的编码技术, 在编码性能获得比 H.264/AVC 高一倍的同时, 也急剧增加了编码过程的计算复杂度。在实际应用中, 尤其是在实时系统的应用中, HEVC 的应用及发展必将因此受到阻碍。所以, 要想大力推进和普及 HEVC 的应用, 如何在不减弱其性能的前提下有效地降低编码过程的计算复杂度, 是目前急需解决的工作。

1 HEVC 帧内预测

与 H.264/AVC 相比, 新一代视频编码标准 HEVC 的编码结构仍属于基于块的混合视频编码框架, 引入了许多新技术, 包括递归的树结构单元及大块变换[4]等。块结构在 H.264/AVC 的基础上进行了扩展, 包括三部分, 即编码单元 (coding unit, CU) 、预测单元 (prediction unit, PU) 和变换单元 (transform unit, TU) 。这种分离的块结构对每个单元的优化是非常有利的。在 HEVC 中, 一帧图像首先被分成许多最大编码单元 (large coding unit, LCU) , 其尺寸在 HM 中默认为 64×64, 此时 LCU 深度为 0。这些 LCU 按照光栅扫描的顺序依次编码, 并递归地分裂成四个编码单元 CU 且深度加 1, 最终形成一个完整的四叉树结构, 如图 1 所示。对每一个编码单元 CU 来说, HM 都将用全率失真 (full RD) 代价算法对它所对应的所有帧内模式进行计算, 然后选出最佳模式, 确定每个 LCU 的最佳 CU 划分。当定义了最大编码单元 LCU 的尺寸和 CU 的分层深度时, 一个完整的四叉树结构也就确定了。预测单元 PU 是在编码单元 CU 的基础上进行划分的, 仅用在四叉树结构叶节点处的子 CU。在 HEVC 中利用预测单元 PU 进行预测共分为三种类型: 帧内预测、帧间预测和跳过模式。对于帧内预测来说, PU 的划分支持两种尺寸, 即 2N×2N 和 N×N (其中 N 的取值为 4, 8, 16 或 32) 。为了进一步提高压缩效率, HEVC 对变换单元 TU 也采用了递归划分, 变换单元 TU 是用于变换和量化的基本单元。
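为直观说明上述全率失真代价下的四叉树划分决策过程,下面给出一个简化的递归示意代码(并非 HM 实现):假设 intraRdCost(depth, idx) 能给出某个 CU 在该深度下帧内编码的最优率失真代价,递归比较父 CU 与 4 个子 CU 的代价之和即可确定是否继续划分。函数名与其中的代价数据均为示意性假设。

#include <cstdio>

// 示意:用固定数据模拟各深度下 CU 的帧内最优率失真代价,实际编码中由遍历全部帧内模式得到
static double intraRdCost( int depth, int idx )
{
  static const double kBase[4] = { 400.0, 120.0, 40.0, 15.0 };  // 假设的各深度单块代价
  return kBase[depth] + ( idx % 4 );                            // 加入少量与位置相关的扰动
}

// 递归决策:返回以 (depth, idx) 标识的 CU 子树的最小率失真代价,并统计评估过的 CU 个数
static double bestSubtreeCost( int depth, int idx, int maxDepth, int* evaluated )
{
  ++*evaluated;
  double costHere = intraRdCost( depth, idx );
  if( depth == maxDepth )        // 已到 8x8,不再划分
    return costHere;

  double costSplit = 0.0;
  for( int i = 0; i < 4; ++i )   // 4 个子 CU
    costSplit += bestSubtreeCost( depth + 1, idx * 4 + i, maxDepth, evaluated );

  return ( costSplit < costHere ) ? costSplit : costHere;  // 取"不划分"与"划分"中代价较小者
}

int main()
{
  int evaluated = 0;
  double best = bestSubtreeCost( 0, 0, 3, &evaluated );
  // 全遍历一个 LCU 需要评估 1 + 4 + 16 + 64 = 85 个 CU
  std::printf( "best LCU cost = %.1f, evaluated CUs = %d\n", best, evaluated );
  return 0;
}

提前终止 CU 划分和模式选择的快速算法,本质上就是在上述递归过程中依据一定条件跳过部分子树和部分候选模式的率失真评估,从而减少需要计算代价的 CU 与模式数目。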