Foreign-Language Translation: Research and Implementation of Image Compression Based on the EZW Algorithm (for graduation thesis use; English-Chinese parallel text)


Foreign-Language Translation
Graduation project title: Research and Implementation of Image Compression Based on the EZW Algorithm
Original text 1: A New Method of Robust Image Compression Based on the Embedded Zerotree Wavelet Algorithm (Part 1)
Translation 1: A New Method of Robust Image Compression Based on the Embedded Zerotree Wavelet Algorithm (Part 1)
Original text 2: A New Method of Robust Image Compression Based on the Embedded Zerotree Wavelet Algorithm (Part 2)
Translation 2: A New Method of Robust Image Compression Based on the Embedded Zerotree Wavelet Algorithm (Part 2)
A New Method of Robust Image Compression Based on
the Embedded Zerotree Wavelet Algorithm (Part 1)
Charles D. Creusere
Abstract—We propose here a wavelet-based image compression algorithm that achieves robustness to transmission errors by partitioning the transform coefficients into groups and independently processing each group using an embedded coder. Thus, a bit error in one group does not affect the others, allowing more uncorrupted information to reach the decoder.
Index Terms—Coefficient partitioning, embedded bitstream, error resilience, image compression, low complexity, wavelets.
I. INTRODUCTION
Recently, the proliferation of wireless services and the Internet along with consumer demand for multimedia products has spurred interest in the transmission of image and video data over noisy communications channels whose capacities vary with time. In such applications, it can be advantageous to combine the source and channel coding (i.e., compression and error correction) processes from both a complexity and an information theory standpoint. In this work, we introduce a form of low-complexity joint source-channel coding in which varying amounts of transmission error robustness can be built directly into an embedded bitstream. The approach taken here modifies Shapiro's embedded zerotree wavelet (EZW) image compression algorithm, but the basic idea can be easily applied to other wavelet-based embedded coders such as those of Said and Pearlman and of Taubman and Zakhor. Some preliminary results using this approach have already been obtained.
This paper is organized as follows. In Section II, we discuss the conventional EZW image compression algorithm and its resistance to transmission errors. Next, Section III develops our new, robust coder and explores the options associated with its implementation. In Section IV, we analyze the performance of the robust algorithm in the presence of channel errors, and we use the results of this analysis to perform comparisons in Section V. Finally, implementation and complexity issues are discussed in Section VI, followed by conclusions in Section VII.
II. EZW IMAGE COMPRESSION
After performing a wavelet transform on the input image, the EZW encoder progressively quantizes the coefficients using a form of bit plane coding to create an embedded representation of the image—i.e., a representation in which a high resolution image also contains all coarser resolutions. This bit plane coding is accomplished by comparing the magnitudes of the wavelet coefficients to a threshold T to determine which of them are significant: if the magnitude is greater than T, that coefficient is significant. As the scanning progresses from low to high spatial frequencies, a 2-b symbol is used to encode the sign and position of all significant coefficients. This symbol can be a + or - indicating the sign of the significant coefficient; a “0” indicating that the coefficient is insignificant; or a zerotree root (ZTR) indicating that the coefficient is insignificant along with all of the finer resolution coefficients corresponding to the same spatial region. The inclusion of the ZTR symbol greatly increases the coding efficiency because it allows the encoder to exploit interscale correlations that have been observed in most images. After computing the “significance map” symbols for a given bit plane, resolution enhancement bits must be transmitted for all significant coefficients; in our implementation, we concatenate two of these to form a symbol. Prior to transmission, the significance and resolution enhancement symbols are arithmetically encoded using the simple adaptive model described in with a four-symbol alphabet (plus one stop symbol). The threshold T is then divided by two, and the scanning process is repeated until some rate or distortion target is met. At this point, the stop symbol is transmitted. The decoder, on the other hand, simply accepts the bitstream coming from the encoder, arithmetically decodes it, and progressively builds up the significance map and enhancement list in exactly the same way as they were created by the encoder.
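The significance (dominant) pass described above can be sketched in a few lines of Python. This is only a simplified illustration over a hypothetical coefficient tree, not Shapiro's full algorithm: it omits the refinement pass, the arithmetic coder, and the bookkeeping that excludes already-significant coefficients from later passes. The function and data-structure names are invented for this sketch.

```python
# Sketch of one EZW dominant pass over a toy parent->children coefficient
# tree.  "IZ" plays the role of the "0" (insignificant) symbol in the text.

def dominant_pass(tree, T):
    """Emit +, -, ZTR, or IZ symbols for one bit plane with threshold T."""
    symbols = []

    def all_insignificant(node):
        # Zerotree test: node and every descendant fall below T.
        return abs(node["coef"]) < T and all(
            all_insignificant(c) for c in node["children"])

    def scan(node):
        c = node["coef"]
        if abs(c) >= T:
            symbols.append("+" if c > 0 else "-")   # sign of significant coef
            for child in node["children"]:
                scan(child)
        elif all_insignificant(node):
            symbols.append("ZTR")                   # whole subtree is skipped
        else:
            symbols.append("IZ")                    # insignificant, but some
            for child in node["children"]:          # descendant is significant
                scan(child)

    scan(tree)
    return symbols

# Toy tree: root 57 with children 13 and -34; -34 has one child, 3.
tree = {"coef": 57, "children": [
    {"coef": 13, "children": []},
    {"coef": -34, "children": [{"coef": 3, "children": []}]}]}
print(dominant_pass(tree, 32))   # ['+', 'ZTR', '-', 'ZTR']
```

After each pass the threshold is halved (T = 16, 8, ...) and the pass repeats, which is what makes the bitstream embedded.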
The embedded nature of the bitstream produced by this encoder provides a certain degree of error protection. Specifically, all of the information which arrives before the first bit error occurs can be used to reconstruct the image; everything that arrives after is lost. This is in direct contrast to many compression algorithms where a single error can irreparably damage the image. Furthermore, we have found that the EZW algorithm can actually detect an error when its arithmetic decoder terminates (by decoding a stop symbol) before reaching its target rate or distortion. It is easy to see why this must happen. Consider that the encoder and decoder use the same backward adaptive model to calculate the probabilities of the five possible symbols (four data symbols plus the stop symbol) and that these probabilities directly define the codewords. Not surprisingly, the length of a symbol’s codeword is inversely proportional to its probability. If a completely random bit sequence is fed into the arithmetic decoder, then the probability of decoding any symbol is completely determined by the initial state of the adaptive model—i.e., the probability weighting defined by the model is not, on the average, changed by a random input.
In our implementation of the Witten et al. arithmetic coder, we set Max_frequency equal to 500 and maintain the stop symbol probability at 1/cum_freq. Because cum_freq (the sum of the frequency counts of all symbols) is divided by two whenever it exceeds Max_frequency, the probability of decoding a stop symbol stays mostly between 1/250 and 1/500. Thus, if a random bitstream is fed into the decoder after training it to this point, an average of 250 to 500 symbols will be processed before the stop symbol is decoded. The bitstream is correctly interpreted as long as the decoder is synchronized with the encoder, but this synchronization is lost shortly after the first error occurs. Once this happens, the incoming bitstream looks random to the decoder (the more efficient the encoder, the more random it will appear). Since each symbol is represented in the compressed image by between one and two bits, the decoder should self-terminate between 31 and 125 bytes after an error occurs. Experimentally, we have found that the arithmetic decoder overrun is typically between 30 and 50 bytes, which is consistent with the theoretical range, since most of these terminations took place while decoding the highly compressed significance map. If the overrun is small compared to the number of bits correctly decoded, it does not significantly affect the quality of the reconstructed image. While some erroneous information is incorporated into the wavelet coefficients, the bit plane scanning structure ensures that it is widely dispersed spatially, making it visually insignificant in the image.
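The 31 to 125 byte self-termination range quoted above can be reproduced with a back-of-envelope calculation: decoding a random stream is modeled as repeated trials with stop-symbol probability p, so the expected number of symbols before the stop symbol is the geometric mean 1/p. The helper name below is invented for this sketch.

```python
# Expected decoder overrun after loss of synchronization, assuming a
# geometric model with stop-symbol probability p_stop and the 1-2 bits
# per symbol figure from the text.

def overrun_bytes(p_stop, bits_per_symbol):
    mean_symbols = 1.0 / p_stop                  # mean of geometric dist.
    return mean_symbols * bits_per_symbol / 8.0  # convert bits to bytes

print(overrun_bytes(1 / 250, 1))   # 31.25  -> ~31 bytes (best case)
print(overrun_bytes(1 / 500, 2))   # 125.0  -> 125 bytes (worst case)
```

The experimentally observed 30 to 50 byte overrun sits at the low end of this range because the significance-map symbols terminate decoding while coded near one bit each.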
Charles D. Creusere
Nationality: USA
Source: IEEE Transactions on Image Processing, 1997, no. 10, pp. 1436-1442
ISSN 1057-7149
A New Method of Robust Image Compression Based on the Embedded Zerotree Wavelet Algorithm (Part 1)

Charles D. Creusere

Abstract—This paper proposes a wavelet-based image compression algorithm that achieves robustness to transmission errors by partitioning the transform coefficients into groups and processing each group independently with an embedded coder. In this way, a bit error in one group does not affect the others, allowing more uncorrupted information to reach the decoder.

Index terms—coefficient partitioning, embedded bitstream, error resilience, image compression, low complexity, wavelets.

I. Introduction

Recently, the proliferation of wireless services and the Internet, together with consumer demand for multimedia products, has deepened interest in transmitting image and video data over noisy communication channels.

In such applications, combining the source and channel coding (i.e., compression and error correction) processes is advantageous from both a complexity and an information-theoretic standpoint.

In this work, we adopt a form of low-complexity joint source-channel coding in which varying amounts of robustness to transmission errors can be built directly into an embedded bitstream.

The approach taken here modifies Shapiro's embedded zerotree wavelet (EZW) image compression algorithm, but the basic idea is easily applied to other wavelet-based embedded coders, such as those of Said and Pearlman and of Taubman and Zakhor.

Some preliminary results have already been obtained with this approach.

This paper is organized as follows.

In Section II, we discuss the conventional EZW image compression algorithm and its resistance to transmission errors.

Next, in Section III we develop our new robust coder and explore the options associated with its implementation.

In Section IV, we analyze the performance of the robust algorithm in the presence of channel errors, and in Section V we use the results of this analysis to make comparisons.

Finally, implementation and complexity issues are discussed in Section VI, followed by conclusions in Section VII.

II. EZW Image Compression

After a wavelet transform is performed on the input image, the EZW encoder progressively quantizes the coefficients using a form of bit-plane coding to create an embedded representation of the image; that is, a high-resolution representation that also contains all coarser resolutions.

This bit-plane coding is accomplished by comparing the magnitudes of the wavelet coefficients with a threshold T to determine which of them are significant: if a coefficient's magnitude is greater than T, that coefficient is significant.

As the scan progresses from low to high spatial frequencies, a 2-bit symbol is used to encode the sign and position of every significant coefficient.

This symbol can be a + or - giving the sign of a significant coefficient; a "0" indicating that the coefficient is insignificant; or a zerotree root (ZTR) indicating that the coefficient is insignificant together with all of the finer-resolution coefficients corresponding to the same spatial region.

The introduction of the ZTR symbol greatly improves coding efficiency, because it allows the encoder to exploit the interscale correlations that have been observed in most images.

After the "significance map" symbols for a given bit plane have been computed, resolution-enhancement bits must be transmitted for all significant coefficients; in our implementation, we concatenate two of these bits to form one symbol.

Before transmission, the significance-map and resolution-enhancement symbols are arithmetically encoded using a simple adaptive model with a four-symbol alphabet (plus one stop symbol).

The threshold T is then divided by two, and the scanning process is repeated until some rate or distortion target is met.

At that point, the stop symbol is transmitted.

The decoder, for its part, simply accepts the bitstream coming from the encoder, arithmetically decodes it, and progressively builds up the significance map and enhancement list in exactly the same way as they were created by the encoder.

The embedded nature of the bitstream produced by this encoder provides a certain degree of error protection.

Specifically, all of the information that arrives before the first bit error can be used to reconstruct the image; everything that arrives afterward is lost.

This is in direct contrast to many compression algorithms, in which a single error can irreparably damage the image.

Furthermore, we have found that the EZW algorithm can actually detect an error when its arithmetic decoder terminates (by decoding a stop symbol) before reaching its target rate or distortion.

It is easy to see why this must happen.

Consider that the encoder and decoder use the same backward-adaptive model to calculate the probabilities of the five possible symbols (four data symbols plus the stop symbol), and that these probabilities directly determine the codewords.

Not surprisingly, the length of a symbol's codeword is inversely proportional to its probability.

If a completely random bit sequence is fed into the arithmetic decoder, the probability of decoding any given symbol is completely determined by the initial state of the adaptive model; that is, on average, a random input does not change the probability weighting defined by the model.

In our implementation of the Witten et al. arithmetic coder, we set Max_frequency to 500 and hold the stop-symbol probability at 1/cum_freq.

Because cum_freq (the sum of the frequency counts of all symbols) is divided by two whenever it exceeds Max_frequency, the probability of decoding a stop symbol stays mostly between 1/250 and 1/500.

Thus, if a random bitstream is fed into a decoder trained to this point, an average of 250 to 500 symbols will be processed before a stop symbol is decoded.

The bitstream is interpreted correctly as long as the decoder remains synchronized with the encoder, but this synchronization is lost shortly after the first error occurs.

Once this happens, the incoming bitstream looks random to the decoder (the more efficient the encoder, the more random it will appear).

Since each symbol is represented in the compressed image by one to two bits, the decoder should self-terminate between 31 and 125 bytes after an error occurs.

Experimentally, we have found that the arithmetic decoder's overrun is typically between 30 and 50 bytes, which is consistent with the theoretical range, since most of these terminations occur while decoding the highly compressed significance map.

If the overrun is small compared with the number of correctly decoded bits, it does not significantly affect the quality of the reconstructed image.

Although some erroneous information is incorporated into the wavelet coefficients, the bit-plane scanning structure ensures that it is widely dispersed spatially, making it visually insignificant in the image.

A New Method of Robust Image Compression Based on
the Embedded Zerotree Wavelet Algorithm (Part 2)
Charles D. Creusere
III. ROBUST EZW (REZW) ALGORITHM
The basic idea of the REZW image compression algorithm is to divide the wavelet coefficients up into S groups and then to quantize and code each of them independently so that S different embedded bitstreams are created. These bitstreams are then interleaved as appropriate (e.g., bits, bytes, packets, etc.) prior to transmission so that the embedded nature of the composite bitstream is maintained. In the remainder of this paper we assume that individual bits are interleaved. For the REZW approach to be effective, each group of wavelet coefficients must be of equal size and must uniformly span the image.
A similar method has been proposed in to parallelize the EZW algorithm, but that method instead groups the coefficients so that data transmission between processors is minimized.
What do we gain by using this new algorithm over the conventional one? As has been pointed out in Section II, the EZW decoder can use all of the bits received before the occurrence of the first error to reconstruct the image. By coding the wavelet coefficients with multiple, independent (and interleaved) bit streams, a single bit error truncates only one of the streams—the others are still completely received. Consequently, the wavelet coefficients represented by the truncated stream are reconstructed at reduced resolution while those represented by the other streams are reconstructed at the full encoder resolution. If the set of coefficients in each stream spans the entire image, then the inverse wavelet transform in the decoder evenly blends the different resolutions so that the resulting image has a spatially consistent quality.
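The two REZW-specific steps, coefficient partitioning and bitstream interleaving, can be sketched as follows. This is a minimal illustration under the assumption that each group has already been embedded-coded into a bit string of equal length (the embedded coding itself is omitted); the function names are invented for this sketch. The stride-based grouping makes each group span the image uniformly, as the text requires.

```python
# Sketch of REZW coefficient partitioning and bit interleaving.

def partition(coeffs, S):
    """Split a flat coefficient list into S equal, image-spanning groups
    by taking every S-th coefficient (a simple stride assignment)."""
    return [coeffs[g::S] for g in range(S)]

def interleave(streams):
    """Bit-interleave S equal-length embedded bitstreams into one
    composite stream, one bit from each stream per round."""
    out = []
    for bits in zip(*streams):
        out.extend(bits)
    return "".join(out)

coeffs = [57, -34, 13, 9, -7, 5, 3, -2]
print(partition(coeffs, 2))              # [[57, 13, -7, 3], [-34, 9, 5, -2]]
print(interleave(["1010", "0011"]))      # '10001101'
```

Note that `zip` truncates at the shortest stream, so in practice the S streams must be coded (or padded) to equal lengths; that detail is elided here. Because one bit from each stream appears in every round, truncating the composite stream at any point truncates all S embedded streams at nearly the same depth, which preserves the embedded property of the whole.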
IV. STOCHASTIC ANALYSIS
To evaluate the effectiveness of this family of robust compression algorithms, we assume that the coded image is transmitted through a binary symmetric, memoryless channel with a probability of bit error given by ε. We would like to know the number of bits correctly received in each of the S streams. Since this quantity is itself a random variable, we use its mean value to characterize the performance of the different algorithms. Because the channel is memoryless, streams terminate independently of each other, but the mean values of their termination points are always the same for a specified ε. Assuming that the image is compressed to B total bits and that S streams are used, then the probability of correctly receiving k of the B/S bits in each stream is given by

    p(k) = ε(1 - ε)^k        for 0 ≤ k < B/S
    p(k) = (1 - ε)^(B/S)     for k = B/S                    (2)

which is a valid probability mass function, as one can easily verify by summing over all k. In (2), (1 - ε)^k is the probability that the first k bits are correct, while ε is the probability that the (k + 1)th bit is in error. Note that a separate term for the case k = B/S is necessary to take into account the possibility that all of the bits in the stream are correctly received. The mean value can now be calculated as
    m_s = Σ_{k=0}^{B/S} k · p(k)                    (3)

On the average, the total number of bits correctly received is S · m_s. If B/S is large relative to 1/ε, then m_s ≈ m_1; that is, the total number of correctly received bits approaches S · m_1, roughly an S-fold increase over the single-stream case. Generally, the gain actually achieved is not this high, but it is nonetheless significant. In Section V, we use (3) to analyze the impact of transmission errors on the average quality of the reconstructed image for all possible values of S.
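Equations (2) and (3) are easy to check numerically. The sketch below, with illustrative values for ε and B/S (not taken from the paper), verifies that p(k) sums to one and that m_s is essentially independent of the stream length once B/S is large relative to 1/ε, which is why splitting the image into S streams recovers roughly S times as many bits on average.

```python
# Numerical check of the pmf (2) and the mean (3) for a truncated
# geometric model of correctly received bits per stream.

def p(k, eps, n):
    """Probability of correctly receiving exactly k of the n = B/S bits."""
    return (1 - eps) ** n if k == n else eps * (1 - eps) ** k

def mean_correct(eps, n):
    """m_s from (3): expected number of correctly received bits."""
    return sum(k * p(k, eps, n) for k in range(n + 1))

eps, n = 1e-3, 20000        # illustrative bit error rate and stream length

total = sum(p(k, eps, n) for k in range(n + 1))
print(abs(total - 1.0) < 1e-9)                         # True: valid pmf
print(abs(mean_correct(eps, n) - (1 - eps) / eps) < 0.01)   # True: m_s ~ 999
print(abs(mean_correct(eps, 2 * n) - mean_correct(eps, n)) < 1e-3)  # True
```

The last check shows that doubling B/S barely changes m_s, i.e., m_s ≈ m_1 when B/S >> 1/ε, consistent with the discussion above.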
Charles D. Creusere

Nationality: USA

Source: IEEE Transactions on Image Processing, 1997, no. 10, pp. 1436-1442

ISSN 1057-7149

A New Method of Robust Image Compression Based on the Embedded Zerotree Wavelet Algorithm (Part 2)

Charles D. Creusere

III. Robust EZW (REZW) Algorithm

The basic idea of the REZW image compression algorithm is to divide the wavelet coefficients into S groups and then to quantize and code each group independently, so that S different embedded bitstreams are created.

These bitstreams are then interleaved in a suitable unit (e.g., bits, bytes, or packets) before transmission, so that the embedded nature of the composite bitstream is maintained.

In the remainder of this paper, we assume that individual bits are interleaved.

For the REZW approach to be effective, each group of wavelet coefficients must be of equal size and must span the image uniformly.

A similar method has been proposed to parallelize the EZW algorithm, but that method instead groups the coefficients so as to minimize data transmission between processors.

What do we gain by using this new algorithm instead of the conventional one? As pointed out in Section II, the EZW decoder can use all of the bits received before the first error occurs to reconstruct the image.

By coding the wavelet coefficients into multiple independent (and interleaved) bitstreams, a single bit error truncates only one of the streams; the others are still received in full.

Consequently, the wavelet coefficients represented by the truncated stream are reconstructed at reduced resolution, while those represented by the other streams are reconstructed at the full encoder resolution.

If the set of coefficients in each stream spans the entire image, the inverse wavelet transform in the decoder evenly blends the different resolutions, so that the resulting image has a spatially consistent quality.

IV. Stochastic Analysis

To evaluate the effectiveness of this family of robust compression algorithms, we assume that the coded image is transmitted through a binary symmetric, memoryless channel with bit error probability ε.

We would like to know the number of bits correctly received in each of the S streams.

Since this quantity is itself a random variable, we use its mean value to characterize the performance of the different algorithms.

Because the channel is memoryless, the streams terminate independently of one another, but the mean values of their termination points are always the same for a specified ε.

Assuming that the image is compressed to B total bits and divided into S streams, the probability of correctly receiving k of the B/S bits in each stream is given by

    p(k) = ε(1 - ε)^k        for 0 ≤ k < B/S
    p(k) = (1 - ε)^(B/S)     for k = B/S                    (2)

This is a valid probability mass function, which can be verified by summing over all values of k.

In equation (2), (1 - ε)^k is the probability that the first k bits are received correctly, while ε is the probability that the (k + 1)th bit is in error.

Note that a separate term for the case k = B/S is needed to account for the possibility that all of the bits in the stream are received correctly.

The mean value can now be calculated as

    m_s = Σ_{k=0}^{B/S} k · p(k)                    (3)

On the average, the total number of bits correctly received is S · m_s.

If B/S is much larger than 1/ε, then m_s ≈ m_1.

Generally, the gain actually achieved is not this high, but it is nonetheless significant.

In Section V, we use equation (3) to analyze the effect of transmission errors on the average quality of the reconstructed image for all possible values of S.
