Quadtree-guided wavelet image coding


A Spine X-ray Image Segmentation Method Based on Multi-Scale Feature Fusion

Scoliosis is a three-dimensional structural deformity of the spine that affects 1%–4% of adolescents worldwide [1].

Diagnosis relies chiefly on the patient's scoliosis angle, and X-ray imaging is currently the modality of first choice; segmenting the spine in X-ray images is the basis for subsequent measurement, registration, and 3D reconstruction.

A number of spine X-ray image segmentation methods have appeared recently.

Anitha et al. [2-3] proposed automatically extracting vertebral endplates with custom filters and obtaining contours automatically with morphological operators, but these methods suffer from a degree of inter-observer error.

Sardjono et al. [4] proposed a physics-based method using a charged-particle model to extract the spinal contour; it is complex to implement and of limited practicality.

Ye et al. [5] proposed a segmentation algorithm based on fuzzy C-means clustering, which is likewise cumbersome and of limited practical value.

All of the above methods segment only the vertebral bodies and cannot produce a complete contour of the spine.

Deep learning has many applications in image segmentation.

Long et al. proposed the Fully Convolutional Network (FCN) [6], which replaces the last fully connected layer of a convolutional neural network with a convolutional layer and applies deconvolution to the resulting feature maps to obtain pixel-level classification.

Improving on the FCN architecture, Ronneberger et al. proposed U-Net [7], an encoder-decoder network for image segmentation.

Wu et al. proposed BoostNet [8] for object detection in spine X-ray images, and a multi-view correlation network [9] to locate the spinal frame.

These methods do not segment the spine image directly; they extract only landmark features and derive the overall spinal contour from the localized features.

Fang et al. [10] used an FCN to segment spinal CT slices and perform 3D reconstruction, but with relatively low segmentation accuracy.

Horng et al. [11] cut the spine X-ray image into patches, segmented individual vertebrae with a residual U-Net, and then reassembled the full spine image, which makes the segmentation pipeline overly cumbersome.

Tan et al. [12] and Grigorieva et al. [13] used U-Net to segment spine X-ray images for Cobb-angle measurement or 3D reconstruction, but segmentation accuracy remains limited.

Although these studies accomplish spine segmentation to some degree, two problems remain: (1) they address only vertebra localization and scoliosis-angle computation, without complete spine segmentation of the image.


Effective wavelet-based compression method with adaptive quantization threshold and zerotree coding

Artur Przelaskowski, Marian Kazubek, Tomasz Jamrógiewicz
Institute of Radioelectronics, Warsaw University of Technology, Nowowiejska 15/19, 00-665 Warszawa, Poland

ABSTRACT

An efficient image compression technique, intended especially for medical applications, is presented. Dyadic wavelet decomposition using the Antonini and Villasenor filter banks is followed by adaptive space-frequency quantization and zerotree-based entropy coding of the wavelet coefficients. Threshold selection and uniform quantization are based on a spatial variance estimate built on the lowest-frequency subband data. The threshold value for each coefficient is evaluated as a linear function of a 9-order binary context. After quantization, zerotree construction, pruning, and arithmetic coding are applied for efficient lossless coding. The presented method is less complex than the most effective EZW-based techniques but achieves comparable compression efficiency: it is similar to SPIHT for MR image compression, slightly better for the CT image, and significantly better for US image compression. The compression efficiency of the presented method is thus competitive with the best published algorithms in the literature across diverse classes of medical images.

Keywords: wavelet transform, image compression, medical image archiving, adaptive quantization

1. INTRODUCTION

Lossy image compression techniques significantly shorten the original image representation at the cost of certain changes to the original data. At lower bit rates these changes are mostly perceived as distortion, but sometimes improved image quality is visible. What is required is to compress the concrete image while preserving all of its important features and removing the noise and the redundancy of the original representation.
The choice of a proper compression method depends on many factors, especially the statistical characteristics of the image (global and local) and the application. Medical applications are particularly challenging because of strict demands on preserving image quality (in the sense of diagnostic accuracy). Perfect reconstruction of very small structures, which are often crucial for diagnosis, is possible even at low bit rates by increasing the adaptivity of the algorithm. Fitting the data-processing method to the changing behaviour of the data within an image, and taking a priori knowledge of the data into account, allow sufficient compression efficiency to be achieved. Recent results clearly show that wavelet-based techniques currently realise these ideas best.

Wavelet transforms represent actual nonstationary signals well and allow a priori and a posteriori knowledge of the data to be used to preserve diagnostically important image elements. As a complete set of transformation basis functions, wavelets are very efficient for image compression. The transform decorrelates the data to a degree similar to the very popular discrete cosine transform, but has additional important features: it often provides a more natural basis than the sinusoids of Fourier analysis, it widens the set of solutions for constructing effective adaptive scalar or vector quantization in the time-frequency domain together with correlated entropy coding techniques, it does not create blocking artefacts, and it is well suited to hardware implementation. Wavelet-based compression is naturally multiresolution and scalable across applications: a single decomposition provides reconstruction at a variety of sizes and resolutions (limited by the compressed representation) as well as progressive coding and transmission in multiuser environments. The wavelet decomposition can be implemented in terms of filters and realised as a subband coding approach.
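One dyadic decomposition level of this subband-coding view can be sketched with the short orthonormal Haar filter pair. This is only a stand-in for illustration: the paper itself uses the longer biorthogonal Antonini 7/9 and Villasenor 18/10 filters.

```python
import numpy as np

def haar_level(img):
    """One dyadic decomposition level with orthonormal Haar filters.

    Returns the four subbands LL, LH, HL, HH, each half the input size.
    Re-applying haar_level to LL yields the next level of the 3-level
    dyadic tree used in the paper.
    """
    a = img.astype(float)
    # low-pass / high-pass filtering + downsampling along rows
    lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    # the same along columns
    LL = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    LH = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    HL = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    HH = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return LL, LH, HL, HH

img = np.arange(64, dtype=float).reshape(8, 8)
LL, LH, HL, HH = haar_level(img)
```

Because the Haar pair is orthonormal, the decomposition preserves the signal energy exactly, which makes the effect of later quantization steps directly measurable in the transform domain.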
The fundamental issue in constructing efficient subband coding techniques is selecting, designing, or modifying the analysis and synthesis filters [1]. Wavelets are a good tool for creating a wide class of new filters that prove very effective in compression schemes. The choice of a suitable wavelet family, using criteria such as regularity, linearity, symmetry, orthogonality, or the impulse and step response of the corresponding filter bank, can significantly improve compression efficiency. For compactly supported wavelets the corresponding filter length is proportional to the degree of smoothness and regularity of the wavelet. But when the wavelets are orthogonal (giving the greatest data decorrelation), the associated FIR filters have nonlinear phase. Symmetry, compact support, and linear phase of the filters can be achieved by using biorthogonal wavelet bases; quadrature-mirror, perfect-reconstruction subband filters are then used to compute the wavelet transform. Biorthogonal wavelet-based filters have proved very efficient in compression algorithms. Constructing a wavelet transform by fitting locally defined basis functions (or finite-length filters) to the characteristics of the image data is possible but very difficult. Because image data are nonstationary, because miscellaneous image features may be important for good reconstruction, and because image quality (signal-to-noise level, spatial resolution, etc.) varies significantly across imaging systems, it is very difficult to devise a construction method for filters that are optimal for compression. Many issues concerning the choice of the most efficient filter bank for image compression remain unresolved [2].

The demands of preserving diagnostic accuracy in reconstructed medical images are exacting. Important high-frequency coefficients that appear at the edges of small structures in CT and MR images should be preserved.
Accurate reconstruction of global organ shapes in US images and strong noise reduction in MN images are also required. It is hard to imagine one filter bank doing all of this best; rather, the best wavelet family should be chosen for each modality.

Our aim is to increase image compression efficiency, especially for medical applications, by applying a suitable wavelet transform, an adaptive quantization scheme, and corresponding entropy coding of the processed decomposition tree. We want to achieve higher acceptable compression ratios for medical images by better preserving their diagnostic accuracy. Many bit-allocation techniques used in quantization schemes rest on assumptions about the data distribution, the quantizer distortion function, and so on. Statistical assumptions built on global data characteristics do not capture local data behaviour exactly, and important detail of the original image, e.g. small textured areas, may be lost. We therefore build the quantization scheme on local data characteristics, namely the direct two-dimensional data context mentioned earlier. The data variance is estimated from the real data as a spatial estimate for corresponding coefficient positions in successive subbands. Below we present the details of the quantization process and the correlated coding technique as parts of an effective, simple wavelet-based compression method that achieves high reconstructed image quality at low bit rates.

2. THE COMPRESSION TECHNIQUE

The scheme of our algorithm is very simple: a dyadic, 3-level decomposition of the original image (256×256 images were used) with the selected filters. For symmetrical filters, symmetric boundary extension at the image borders was used; for asymmetrical filters, periodic (circular) extension.

Figure 1. Dyadic wavelet image decomposition scheme: horizontal relations and parent-children relations.
LL – the lowest-frequency subband.

Our approach to filters is utilitarian: we use the literature to select proper filters rather than design them. We conducted experiments with many kinds of wavelet transform in the presented algorithm, testing a long list of wavelet families and corresponding filters: Daubechies, Adelson, Brislawn, Odegard, Villasenor, Spline, Antonini, Coiflet, Symmlet, Beylkin, Vaid, etc. [3]. Generally the Antonini filters [4] proved the most efficient; the Villasenor, Odegard, and Brislawn filters achieve similar compression efficiency. Finally, Antonini 7/9-tap filters are used for MR and US image compression and Villasenor 18/10-tap filters for CT image compression.

2.1 Adaptive space-frequency quantization

The presented space-frequency quantization technique is realised as data pre-selection, threshold selection, and scalar uniform quantization with a step size conditioned by the chosen compression ratio. For adaptive estimation of the threshold and quantization-step values, two extra data structures are built. The pre-selection evaluates the zero-quantized data set and predicts the spatial context of each coefficient. A simple quantization of the lowest-frequency subband (LL) then yields a prediction of the quantized-coefficient variance as a spatial function across the successive subbands. The quantization step is slightly modified by a model built on this variance estimate. Additionally, the set of coefficients is reduced by threshold selection: the threshold is raised in areas dominated by zero-valued coefficients, with the amount of growth depending on the coefficient's spatial position according to the variance-estimate function.

First, zero-quantized data prediction is performed. The step size w is assumed constant for all coefficients at each decomposition level, so for this quantization model the threshold equals w/2.
Each coefficient whose magnitude is below the threshold is predicted to be zero-valued after quantization (insignificant); otherwise it is predicted to be nonzero (significant). This yields the predictive zero-quantized coefficient map P used for threshold evaluation in the next step. The P map is created as follows:

    if |c_i| < w/2 then p_i = 0 else p_i = 1,    (1)

where i = 1, 2, ..., m·n (m, n – horizontal and vertical image size) and c_i is the wavelet coefficient value.

The coefficient variance estimate for the coefficients of the following subbands is built from the LL data at corresponding spatial positions. Quantization with the step size w is performed in LL and the most frequently occurring coefficient value is estimated; this value is called the MHC (mode of histogram coefficient). The areas where the MHC occurs are strongly correlated with the zero-valued data areas in the successive subbands, so the absolute difference between the quantized LL data and the MHC is used as the variance estimate for the next-subband coefficients at the corresponding spatial positions. We tested many different schemes, but this model gives the best results in terms of final compression efficiency. The variance estimate is rather coarse, but this simple adaptive model built on the real data needs no additional information for the reconstruction process and increases compression efficiency. Let lc_i, i = 1, 2, ..., lm, be the set of quantized LL coefficient values, with lm the size of this set, and let the mode of histogram coefficient MHC be estimated as

    f(MHC) = max f(lc_i),  MHC ∈ Al,  lc_i ∈ Al,    (2)

where Al is the alphabet of the data source describing the coefficient values and f(lc_i) = n_{lc_i} / lm, with n_{lc_i} the number of lc_i-valued coefficients. The normalised variance estimates ve_si for the next-subband coefficients at positions corresponding to i (parent-children relations from the top to the bottom of the zerotree, see fig.
1) are expressed simply by the following equation:

    ve_si = |lc_i − MHC| / ve_max,    (3)

where ve_max is the largest such difference. This set of ve_si values is treated as the top-parent estimate and applied to all corresponding child nodes of the hierarchical wavelet decomposition tree.

A 9-order context model is applied for coarser data reduction in 'unimportant' areas (usually of low diagnostic importance). Unimportance means that in these areas the majority of the data are zero and significant values are isolated; a single significant value appearing there most often indicates a high-frequency coefficient caused by noise. Coarser data reduction by a higher threshold therefore increases the signal-to-noise ratio by removing noise, while at the edges of diagnostically important structures significant values are grouped together and the threshold is kept lower. The P map is used to estimate each coefficient's context. A noncausal prediction of the coefficient importance is made as a linear function of the surrounding binary data, excluding the significance of the considered coefficient itself. Polynomial, exponential, and hyperbolic functions were also tested, but the linear function proved the most efficient. The data context shown in fig. 2 is formed for each coefficient; at the previously processed points of the data stream it is modified by the results of selection with the actual threshold values at those points instead of w/2 (causal modification). The coefficient importance cim_i is evaluated for each coefficient c_i as

    cim_i = coeff_1 · (1 − (1/9) Σ_{j=1..9} p_{i,j}),    (4)

where i = 1, 2, ..., m·n. Next the threshold value is evaluated for each coefficient c_i:

    th_i = (w/2) · (1 + cim_i · (1 − ve_si)),    (5)

where si denotes the LL parent spatial location corresponding to i at the lower decomposition levels. The modified quantization-step model uses the LL-based variance estimate to slightly increase the step size for lower-variance coefficients.
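Equations (1)–(7) can be sketched in a few lines of numpy. This is a minimal reading of the reconstructed formulas, with several simplifications flagged in the comments: the LL parents are mapped 1:1 onto the subband instead of onto their zerotree children, the paper's 9-order context is approximated by the 8 surrounding neighbours, and coeff1/coeff2 are placeholders for the fitted constants.

```python
import numpy as np

def adaptive_thresholds(subband, ll_q, w, coeff1=1.0):
    """Sketch of Eqs. (1)-(5) as reconstructed above.

    For brevity the quantized LL data ll_q have the same shape as the
    subband; the real scheme maps each LL parent onto its children.
    """
    p = (np.abs(subband) >= w / 2).astype(int)       # Eq. (1): P map
    vals, counts = np.unique(ll_q, return_counts=True)
    mhc = vals[np.argmax(counts)]                    # Eq. (2): histogram mode
    dev = np.abs(ll_q - mhc).astype(float)
    ve = dev / dev.max() if dev.max() > 0 else dev   # Eq. (3): variance estimate
    pad = np.pad(p, 1)                               # Eq. (4): binary context
    ctx = sum(pad[1 + dy:1 + dy + p.shape[0], 1 + dx:1 + dx + p.shape[1]]
              for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    cim = coeff1 * (1.0 - ctx / 8.0)                 # high where zeros dominate
    th = (w / 2.0) * (1.0 + cim * (1.0 - ve))        # Eq. (5): threshold
    return p, mhc, th

def quantize(c, th, ve, w, coeff2=0.5):
    """Sketch of Eqs. (6)-(7) for a detail subband; plain rounding
    stands in for the uniform quantizer (LL itself would simply be
    quantized with the unmodified step w)."""
    c = np.asarray(c, dtype=float)
    mw = w * (1.0 + coeff2 * (1.0 - np.asarray(ve)))         # Eq. (7)
    return np.where(np.abs(c) < th, 0.0, np.round(c / mw))   # Eq. (6)
```

Note how the two adaptive mechanisms cooperate: th grows above w/2 exactly where the context says zeros dominate and the LL-based variance estimate is low, while mw slightly widens the step in those same low-variance areas.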
Threshold selection and uniform quantization proceed as follows: each coefficient is first compared with its threshold value and then quantized, using the step w for LL and the modified step value mw_si for the other subbands. For each coefficient c_i:

    if c_i ∈ LL then c_i = c_i / w
    else if |c_i| < th_i then c_i = 0 else c_i = c_i / mw_si,    (6)

where

    mw_si = w · (1 + coeff_2 · (1 − ve_si)).    (7)

The coeff_1 and coeff_2 values are fitted to the actual data characteristics by using a priori image knowledge and performing extensive tests on groups of images with similar characteristics.

Figure 2. a) The 9-order coefficient context for evaluating the coefficient importance value in the adaptive threshold procedure; b) P-map context of a single edge coefficient.

2.2 Zerotree construction and coding

Sophisticated entropy coding methods that can significantly improve compression efficiency should retain a progressive mode of data reconstruction. Progressive reconstruction is simple and natural after a wavelet-based decomposition, so the wavelet coefficient values are coded subband-sequentially, with the spectral selection typical of wavelet methods. Subbands of the same scale are coded in order: first the lowest-frequency subband, then the right-side coefficient block, then the lower-left and lower-right blocks. The next, larger-scale data blocks are then coded in the same order. To reduce the redundancy of this representation a zerotree structure is built. The zerotree describes well the correlation between data values in the horizontal and vertical directions, especially between large areas of zero-valued data. These correlated fragments of the zerotree are removed, and the final data streams for entropy coding shrink significantly. The zerotree structure also allows data streams of different characteristics to be formed, which increases coding efficiency.
Instead of the bit-plane (MSB-to-LSB) coding applied in many techniques, which requires the construction of an efficient context model, we used simple arithmetic coders on these data streams. By giving up successive approximation we lose full progressiveness, but the algorithm is simpler and sometimes even more efficient. Two slightly different arithmetic coders were used to produce the final data stream.

2.2.1 Construction and pruning of the zerotree

The dyadic hierarchical image decomposition is shown in fig. 1. The decomposition tree structure reflects this hierarchical data processing and corresponds strictly to the data streams created during the transform. The four lowest-frequency subbands, belonging to the coarsest scale, are located at the top of the tree. These data have no parent values but are the parents of the coefficients at the next tree level of larger scale in corresponding spatial positions; this correspondence is shown in fig. 1 as the parent-children relations. Each parent coefficient has four direct children, and each child has exactly one direct parent. Additionally, horizontal relations at the top tree level are introduced to describe the data correlation better.

The decomposition tree becomes a zerotree when the quantized node values are labelled with symbols of a binary alphabet: each node is checked and marked significant (nonzero) or insignificant (zero). For LL nodes the significance test is slightly different: the MHC value is used again, because the LL areas where the MHC occurs correlate strongly with the zero-valued areas in the next subbands. An LL node is significant if its value differs from the MHC and insignificant if it equals the MHC. The MHC value must be sent to the decoder for correct tree reconstruction.

The next step of the algorithm is pruning the tree.
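The branch-collapsing idea behind this pruning can be sketched on a toy quadtree. For brevity the bottom-level and middle-level rules are folded into one recursive test, and the special horizontal rule for the four coarsest-scale subbands is omitted:

```python
def prune(node):
    """Bottom-up zerotree pruning sketch for one quadtree branch.

    node = {'sig': bool, 'children': list of 4 child nodes (or [])}.
    Returns 'PBN' when the node and every descendant are insignificant
    (the children are then removed, collapsing the whole branch into a
    single pruned-branch-node symbol), else 'S'.
    """
    labels = [prune(ch) for ch in node['children']]
    if not node['sig'] and all(lab == 'PBN' for lab in labels):
        node['children'] = []      # remove the all-insignificant branch
        return 'PBN'
    return 'S'

# a parent with four all-zero leaves collapses to a single PBN symbol
zero_leaf = lambda: {'sig': False, 'children': []}
tree = {'sig': False, 'children': [zero_leaf() for _ in range(4)]}
```

A branch survives as soon as any node in it is significant, which matches the intent described below: only branches leading exclusively to insignificant nodes may be removed.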
Only branches leading to insignificant nodes can be pruned, and the procedure differs slightly at the different levels of the zerotree. Pruning starts at the bottom of the wavelet zerotree: the four child values and their parent from the level above are tested, and if the parent and all children are insignificant, the branch with the child nodes is removed and the parent is marked as a pruned branch node (PBN). The tree alphabet is thus widened to three symbols. At the middle levels the tree is pruned if the parent value is insignificant and all four children are recognised as PBN. Our experiments showed that adding extra symbols to the tree alphabet does not help to decrease the code bit rate. Pruning at the top level is different: node values are checked along the horizontal tree directions, exploiting the spatial correlation of the quantized coefficients in the subbands of the coarsest scale (see fig. 1). The four coefficients at the same spatial position in the different subbands are compared with one another, and the tree is pruned if the LL node is insignificant and the three corresponding coefficients are PBN; the three branches with their nodes are removed and the LL node is marked as PBN, meaning that all its children across the zerotree are insignificant. At the other tree levels the horizontal spatial correlation between the data is not strong enough for its use to increase coding efficiency.

2.2.2 Forming three data streams and coding

The pruned zerotree structure is convenient for creating the data streams for the final entropy coding. Instead of PBN, zero, or MHC values (for LL nodes), an additional code value is inserted into the set of coded values. Bit maps of the PBN spatial distribution at the different tree levels can also be used; we optionally used only the PBN bit map of the LL data, which slightly increases coding efficiency.
The zerotree is coded sequentially from top to bottom to support progressive reconstruction. Because the quantized data characteristics vary and the alphabet of the data-source model is wider after zerotree pruning, three separate data streams (and optionally a fourth bit-map stream) are produced for efficient coding. It is well known from information theory that when a data set shows significant variability of statistics, and data of different statistics (alphabet and estimated conditional probabilities) can be grouped, it is better to separate the groups and encode each independently; this is especially true when a context-based arithmetic coder is used. The separation is made on the basis of the zerotree, and the following data are then coded independently:

- the LL data set, which usually has fewer insignificant (MHC-valued) coefficients, fewer PBNs, and less spatial correlation than the other subbands (a word- or char-wise arithmetic coder is less efficient here than a bitwise one); optionally this stream is split into a PBN-distribution bit map and a word/char data set without PBNs;
- the rest of the top level (the three next subbands) and the middle-level subbands, with a considerable number of zero-valued (insignificant) coefficients and PBN code values; the data correlation is greater, so a word- or char-wise arithmetic coder is efficient enough;
- the lowest-level data set, usually with a great number of insignificant coefficients and without PBN code values; the data correlation is very high.

The Urban Koistinen arithmetic coder (DDJ Compression Contest public-domain code, available on the internet) with a simple bitwise algorithm is used for coding the first data stream. For the second and third streams a 1st-order arithmetic coder based on the code presented in Nelson's book [5] is applied. The Urban coder proved up to 10% more efficient than the Nelson coder for the first data stream.
Combining the rest of the top-level data with the middle-level data of similar statistics increases coding efficiency by up to about 3%. The procedure of zerotree construction, pruning, and coding is shown in fig. 3.

Figure 3. Coding scheme for the quantized wavelet coefficients using the zerotree structure (binary zerotree construction → bitwise arithmetic coding → final compressed data representation). PBN – pruned branch node.

3. TESTS, RESULTS AND DISCUSSION

Images of many different medical modalities were used in our tests. For the results presented here we chose three 256×256×8-bit images from different medical imaging systems: CT (computed tomography), MR (magnetic resonance), and US (ultrasound); they are shown in fig. 4. Mean square error (MSE) and peak signal-to-noise ratio (PSNR) were taken as the reconstructed-image quality criteria. Subjective quality was assessed in a very simple way, by the psychovisual impression of a non-professional observer.

The adaptive quantization scheme based on the modified threshold value and quantization step size improves the overall compression by up to 10% compared with simple uniform scalar quantization. Applying the zerotree structure and its processing improves coding efficiency by up to 10% compared with direct arithmetic coding of the quantized data set.

To evaluate the presented technique, called MBWT (modified basic wavelet-based technique), its compression efficiency was compared with a DCT-based algorithm [6,7] and SPIHT [8]. The MSE and PSNR results are presented in table 1. The two wavelet-based techniques are clearly more efficient than the DCT-based compression in terms of MSE/PSNR, and also in our subjective evaluation, in all cases.
MBWT outperforms SPIHT for US images and slightly for the CT test image at the lower bit-rate range. The concept of an adaptive threshold and modified quantization step size is effective for strong noise reduction, but at the lower bit rates it sometimes proves too coarse and very small details of the image structures are deformed. US images contain a significant noise level and diagnostically important small structures do not appear (the image resolution is poor); these images can therefore be compressed efficiently by MBWT with the image quality preserved, as fig. 5 clearly shows. The improvement in compression efficiency relative to SPIHT is almost constant over a wide range of bit rates (0.3–0.6 dB of PSNR).

Figure 4. Examples of the images used in the compression-efficiency tests; the results in table 1 and fig. 5 were obtained for these images: a) echocardiography image, b) CT head image, c) MR head image.

Table 1. Comparison of the compression efficiency of the three techniques: DCT-based, SPIHT, and MBWT. The bit rates are chosen in the diagnostically interesting range (near the borders of acceptance).

    Modality – bit rate    DCT-based         SPIHT             MBWT
                           MSE   PSNR [dB]   MSE   PSNR [dB]   MSE   PSNR [dB]
    MRI – 0.70 bpp         8.93  38.62       4.65  41.45       4.75  41.36
    MRI – 0.50 bpp         13.8  36.72       8.00  39.10       7.96  39.12
    CT  – 0.50 bpp         6.41  40.06       3.17  43.12       3.18  43.11
    CT  – 0.30 bpp         18.5  35.46       8.30  38.94       8.06  39.07
    US  – 0.40 bpp         54.5  30.08       31.3  33.18       28.3  33.61
    US  – 0.25 bpp         91.5  28.61       51.5  31.01       46.8  31.43

The noise level in CT and MR images is lower, and small structures are often important in their analysis; that is why the benefits of MBWT are smaller in this case. Overall, the compression efficiency of MBWT is comparable to SPIHT for these images. The presented method loses its advantage at higher bit rates (see the PSNR of the 0.7 bpp MR representation), but at lower bit rates both MR and CT images are compressed significantly better.
Perhaps the reason is that the coefficients are reduced relatively more strongly at the lower bit-rate range because of the importance reduction in the MBWT threshold selection.

Figure 5. Comparison of the compression efficiency of SPIHT and the technique presented in this paper (MBWT) over the low bit-rate range (0.2–0.8 bits/pixel; PSNR in dB) for the US test image.

4. CONCLUSIONS

The adaptive space-frequency quantization scheme and zerotree-based entropy coding are not time-consuming yet achieve significant compression efficiency. Our algorithm is generally simpler than EZW-based algorithms [9] and other algorithms with extended subband classification or space-frequency quantization models [10], but the compression efficiency of the presented method is competitive with the best published algorithms in the literature across diverse classes of medical images. MBWT-based compression gives slightly better results than SPIHT for high-quality images (CT and MR) and significantly better efficiency for US images. The presented compression technique has proved very useful and promising for medical applications. An appropriate evaluation of reconstructed image quality is desirable to delimit the acceptable lossy compression ratios for each medical modality. We intend to improve the efficiency of the method by designing a construction method for adaptive filter banks together with a correlated, more adequate quantization scheme; this seems possible by applying a proper a priori model of the image features that determine diagnostic accuracy. More efficient context-based arithmetic coders and more sophisticated zerotree structures should also be tested.

REFERENCES

1. Hui, C. W. Kok and T. Q. Nguyen, "Image Compression Using Shift-Invariant Dyadic Wavelet Transform", submitted to IEEE Trans. Image Proc., April 1996.
2. J. D. Villasenor, B. Belzer and J. Liao, "Wavelet Filter Evaluation for Image Compression", IEEE Trans. Image Proc., August 1995.
3. A. Przelaskowski, M. Kazubek, T.
Jamrógiewicz, "Optimalization of the Wavelet-Based Algorithm for Increasing the Medical Image Compression Efficiency", submitted and accepted to TFTS'97, 2nd IEEE UK Symposium on Applications of Time-Frequency and Time-Scale Methods, Coventry, UK, 27-29 August 1997.
4. M. Antonini, M. Barlaud, P. Mathieu and I. Daubechies, "Image coding using wavelet transform", IEEE Trans. Image Proc., vol. IP-1, pp. 205-220, April 1992.
5. M. Nelson, The Data Compression Book, chapter 6, M&T Books, 1991.
6. M. Kazubek, A. Przelaskowski and T. Jamrógiewicz, "Using A Priori Information for Improving the Compression of Medical Images", Analysis of Biomedical Signals and Images, vol. 13, pp. 32-34, 1996.
7. A. Przelaskowski, M. Kazubek and T. Jamrógiewicz, "Application of Medical Image Data Characteristics for Constructing DCT-based Compression Algorithm", Medical & Biological Engineering & Computing, vol. 34, Supplement I, part I, pp. 243-244, 1996.
8. A. Said and W. A. Pearlman, "A New Fast and Efficient Image Codec Based on Set Partitioning in Hierarchical Trees", submitted to IEEE Trans. Circ. & Syst. Video Tech., 1996.
9. J. M. Shapiro, "Embedded Image Coding Using Zerotrees of Wavelet Coefficients", IEEE Trans. Signal Proces., vol. 41, no. 12, pp. 3445-3462, December 1993.
10. Z. Xiong, K. Ramchandran and M. T. Orchard, "Space-Frequency Quantization for Wavelet Image Coding", IEEE Trans. Image Proc., to appear in 1997.

Wavelet image inpainting
coding JPEG2000 image files and data loss during wireless transmission processes, for example, could both result in the need for filling in the missing information in the wavelet domain instead of the pixel domain. Working in the wavelet domain, instead of the pixel domain, changes the nature of the inpainting problem, since damage to wavelet coefficients can create correlated damage patterns in the pixel domain. For instance, wavelet-based image inpainting problems usually have no clear-cut inpainting regions, which most existing PDE-based inpainting models in the pixel domain require. A direct consequence of this lack of geometric regularity of the inpainting regions is that geometric interpolation techniques in the pixel domain cannot be applied [39]. On the other hand, direct interpolation in the wavelet domain is also problematic, as wavelet coefficients (except the low frequencies) are computed to decouple the correlation between neighboring pixels, and the retained high-frequency coefficients provide minimal information about the missing ones. For this reason, many contemporary error-concealment methods for JPEG (built upon the discrete cosine transform (DCT)) or JPEG2000 (built upon wavelets) images require additional control of regularity in the pixel domain, in addition to direct operations in the transformed domain (i.e., DCT or wavelets). Examples include Hemami-Gray's bi-cubic Coons surfaces [34], non-uniform rational B-splines (NURBS) by Park-Lee [44] and by Cheng et al. [18], Niu-Poston's harmonic post-processing techniques [43], the separate reconstruction of structures and textures by Sapiro and collaborators [47], and least-squares minimization in wavelet-domain reconstruction [46]. However, these JPEG-based error-concealment methods usually work on images that have already been partitioned into 8 × 8 or 16 × 16 blocks.

Each missing or damaged block then corresponds to a well-defined square region to be filled in in the pixel domain. This differs from our current work, in which no assumption is made about block partitioning. Depending on the scales or resolutions, missing or damaged wavelet coefficients can cause widespread degradation in the pixel domain; even a few coefficients can potentially affect all pixels. Moreover, unlike denoising problems, in which the perturbation in the pixel domain is mostly homogeneous, the degradation in wavelet inpainting problems is usually inhomogeneous (different regions can suffer different levels of damage). This new phenomenon demands different treatments in different regions of the pixel domain. These novel features and challenges call for new models and methods of image inpainting in the wavelet domain. An important guiding principle for us is that even though the primary goal is to fill in the missing coefficients in the wavelet domain, it is important to control the regularity in the pixel domain, so that the inpainted images retain important geometrical features, especially when noise is present. Such considerations have motivated the variational PDE approach of our current work. Variational PDE techniques have been widely used in numerous applications such as image segmentation [7] [13] [52], restoration [14] [48], and compression [16] [26]. Even for traditional image inpainting, PDE methods have been well studied [3] [9] [11] [28] [2]. The growing impact of PDE techniques in image processing is mainly due to their capability to control geometrical features of images. PDEs (many of them derived from variational principles) are usually designed to possess certain desirable geometrical properties. For example, total variation (TV) minimization, which leads to a curvature term in the corresponding Euler-Lagrange equation, can retain sharp

Shearlet多方向特征融合与加权直方图的人脸识别算法

Shearlet多方向特征融合与加权直方图的人脸识别算法

t h e S h a n n o n e n t r o p y t h e o r y . Ma n y e x p e ime r n t s h a v e b e e n d o n e o n t h e OR L,F E RE T a n d YALE f a c e d a t a b a s e , wh i c h
中图分类号 :T P 3 9 1 . 4 1
文献标 志码 :A
d o i :1 0 . 3 9 6 9  ̄ . i s s n . 1 0 0 3 — 5 0 1 X. 2 0 1 3 . 1 1 . 0 1 5
Fa c e Re c o g ni t i o n Ba s e d o n She a r l e t M ul t i — o r i e nt a t i o n
S h e a r l e t 多方 向特征融合与 加权直方 图的人脸识别算法
周 霞 ,张 鸿杰 ,王 宪
(轻工过程先进控制教育部重点实验室( 江南大学) ,江苏 无锡 2 1 4 1 2 2 )
摘要 :针对 S h e a r l e t变换在提取特征数据 时存在 冗余性 以及无法对全局 特征进 行稀疏表征 的缺 点,提 出 了一种 S h e a r l e t 多方 向特征融合与加权直方 图的人脸识别算法。首先 。对原始 图像采用 S h e a r l e t 变换得 到多尺度 多方 向的
人 脸 特 征 , 然后 按 照 两种 编 码 方 式将 同一 尺 度 下 不 同 方 向的 特 征 进 行 编码 融 合 , 并将 融合 后 的尺 度 图像 划 分 为 若
干 大 小 相等 的不 重 叠矩 形 块 ,利 用 S h a n n o n 熵理 论 对 各 子 模 式进 行 加 权 融 合 。在 OR L、F E R E T和 Y AL E 人 脸 库 中做 了 多组 实验 , 充分 证 明该 算 法 相 对 于 传 统 S h e a r l e t 滤 波 器 在 分 类 识 别 上 更 具 有优 势 。 关 键 词 : 人 脸 识 别 ; 加 权 直 方 图; 特 征 融 合 ; S h e a r l e t 变 换
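The Shannon-entropy weighting of sub-patterns can be sketched as follows: each block's gray-level histogram entropy becomes its fusion weight. This is a minimal illustration of the idea, not the paper's exact weighting scheme.

```python
import math

def block_entropy(block, levels=256):
    # Shannon entropy (in bits) of the gray-level histogram of one block
    hist = [0] * levels
    for v in block:
        hist[v] += 1
    n = len(block)
    h = 0.0
    for c in hist:
        if c:
            p = c / n
            h -= p * math.log2(p)
    return h

def entropy_weights(blocks, levels=256):
    # Normalize per-block entropies into fusion weights that sum to 1
    ents = [block_entropy(b, levels) for b in blocks]
    total = sum(ents)
    if total == 0:  # all blocks constant: fall back to uniform weights
        return [1.0 / len(blocks)] * len(blocks)
    return [e / total for e in ents]
```

A flat (constant) block carries zero entropy and therefore zero weight, so uninformative regions of the face contribute little to the fused histogram.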

Content-adaptive Image Watermarking in the Shearlet Transform Domain

廖颀
【Journal】电子技术应用
【Year (Volume), Issue】2010(000)010
【Abstract】A watermarking algorithm is proposed in which the embedding-strength factor in the Shearlet transform domain adapts to the image content. The algorithm first exploits the selectivity of the Shearlet transform in sparsely representing image content to locate embedding positions that match the characteristics of the human visual system, and then computes the embedding-strength factor adaptively from the image content, largely resolving the conflict between robustness and invisibility. Experimental results show that the algorithm withstands a variety of attacks and is strongly robust.
【Pages】5 (P139-142, 146)
【Author】廖颀
【Affiliation】School of Physics and Electronic Information, Gannan Normal University, Ganzhou, Jiangxi 341000
【Language】Chinese
【CLC number】TP391.41
【Related literature】
1. A content-based adaptive remote-sensing image watermarking algorithm in the DCT domain [J], 王向阳; 杨红颖; 邬俊
2. An adaptive image denoising algorithm in the Shearlet transform domain [J], 朱华生; 徐晨光
3. A content-adaptive, optimized DWT-HMM robust image watermarking algorithm [J], 王春桃; 倪江群; 黄继武; 张荣跃; 罗锡璋
4. An adaptive quantization dual-watermarking algorithm based on image content [J], 陈力; 韩娜
5. An adaptive image watermarking algorithm in the DCT domain [J], 杨艺敏

Single-object Tracking Based on Multi-layer Feature Embedding

1. Content description
Single-object tracking based on multi-layer feature embedding is a tracking technique widely used in computer vision. Its core idea is to extract a feature representation of the target object through multi-layer feature embedding and to use this representation for tracking. The algorithm first preprocesses the input image (dimensionality reduction and enhancement), then feeds the result into a neural network to obtain feature maps at different levels. Pooling these feature maps yields a low-dimensional feature vector, which is fed to the tracker to track the target in real time.
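The pipeline described above (multi-layer feature maps → pooling → embedding vector → matching) can be illustrated minimally; the global average pooling and the cosine-similarity matcher below are assumptions made for illustration, not the actual network of the work being summarized.

```python
def avg_pool(feature_map):
    # Global average pooling: one scalar per channel of a feature map.
    # feature_map: list of channels, each a 2-D list of floats.
    return [sum(map(sum, ch)) / (len(ch) * len(ch[0])) for ch in feature_map]

def embed(feature_maps):
    # Concatenate pooled descriptors from several layers into one embedding
    vec = []
    for fm in feature_maps:
        vec.extend(avg_pool(fm))
    return vec

def cosine(a, b):
    # Cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def best_candidate(target_emb, candidate_embs):
    # Pick the candidate window whose embedding is closest to the target's
    return max(range(len(candidate_embs)),
               key=lambda i: cosine(target_emb, candidate_embs[i]))
```

In a real tracker the candidate embeddings would come from search windows around the previous target position, and the winning window becomes the new position estimate.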

To improve the performance of single-object tracking, this work proposes a method based on multi-layer feature embedding. The method first introduces an adaptive learning-rate strategy, so that the neural network can adjust its learning rate according to the current training state. An attention mechanism is introduced so that the network focuses on the most informative features. To further improve the tracker's robustness, a multi-tracker fusion scheme is adopted in which the outputs of several trackers are combined by weighted fusion, yielding a more accurate estimate of the target position. Experiments on several datasets show significant performance gains, demonstrating the effectiveness and feasibility of the method for single-object tracking.

1.1 Research background
With the rapid development of computer vision and deep learning, object tracking plays an increasingly important role in fields such as security, intelligent surveillance and autonomous driving. A single-object tracking algorithm, widely used in video analysis, follows a single target through a video sequence in real time, comparing its position across adjacent frames to estimate the target's motion trajectory. Traditional single-object trackers, however, are not robust in complex scenes or under occlusion and motion blur. To address these problems, researchers have proposed many improved trackers, such as Kalman-filter-based, extended-Kalman-filter-based and deep-learning-based trackers. These methods improve performance to some extent but still have limitations, such as poor support for multi-object tracking and poor adaptability to non-stationary motion. Developing a single-object tracker that both tracks a single target effectively and copes with these varied challenges is therefore of theoretical and practical significance.

1.2 Research objective
This work aims to design a single-object tracking algorithm based on multi-layer feature embedding that improves tracking accuracy and robustness.

An Improved Adaptive Threshold Edge Detection Algorithm Based on Cubic B-spline Wavelet Transform

计算技术与自动化 (Computing Technology and Automation), Vol. 40, No. 1, Mar. 2021. Article ID: 1003-6199(2021)01-0101-03. DOI: 10.16339/j.cnki.jsjsyzdh.202101019
王煜, 谢政, 朱淳钊, 夏建高
(School of Architecture and Environmental Art, Hubei Engineering Institute, Huangshi, Hubei 435005, China)

Abstract: To extract edges from noisy images, an improved NormalShrink adaptive threshold denoising algorithm is proposed, combining edge detection with denoising. The algorithm first extracts the wavelet coefficients that may contain image edge features by means of the wavelet transform and the local modulus-maxima method. Exploiting the special spatial relationship between edge pixels and the different effects of noise at the various wavelet decomposition scales, it then constructs an improved NormalShrink adaptive threshold suited to each scale level and uses it to screen the extracted wavelet coefficients. Experimental results show that, compared with an improved Canny operator and the traditional NormalShrink adaptive threshold, the method extracts more complete and clearer image edges and raises the peak signal-to-noise ratio by about 6 dB.
Keywords: edge detection; wavelet transform; adaptive threshold; PSNR

The identification and extraction of image edge information has important applications in image segmentation, image recognition and related fields, and extracting clear, reliable edges is an active research topic.
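The traditional NormalShrink threshold that the paper improves on is, in the form commonly given in the literature, computed per subband as T = β·σₙ²/σ_y with β = √(ln(L_k/J)). The sketch below illustrates that textbook formula together with soft thresholding; it is not the paper's improved variant.

```python
import math
import statistics

def normalshrink_threshold(subband, hh1, levels):
    # Classical NormalShrink threshold (as commonly stated in the literature):
    #   T = beta * sigma_n^2 / sigma_y,  beta = sqrt(ln(L_k / J))
    # sigma_n: noise std estimated from the finest HH subband via the robust
    # median estimator; sigma_y: std of the current subband; L_k: subband
    # length; J: number of decomposition levels.
    sigma_n = statistics.median(abs(c) for c in hh1) / 0.6745
    sigma_y = statistics.pstdev(subband) or 1e-12
    beta = math.sqrt(max(math.log(len(subband) / levels), 0.0))
    return beta * sigma_n ** 2 / sigma_y

def soft_shrink(coeffs, t):
    # Soft thresholding: shrink each coefficient toward zero by t
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]
```

The paper's contribution is to adapt this threshold per scale while protecting modulus-maxima (edge) coefficients from being shrunk away.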

A New Image Denoising Algorithm in the Translation-invariant Shearlet Transform Domain

石满红; 刘卫
【Journal】红外技术 (Infrared Technology)
【Year (Volume), Issue】2016(038)001
【Abstract】An image denoising algorithm in the translation-invariant Shearlet transform domain is proposed, based on bilateral filtering and the normal inverse Gaussian (NIG) model. The image is decomposed by the translation-invariant Shearlet transform; the low-frequency subband is processed with a fast bilateral filter, while the high-frequency subbands are modeled with the NIG distribution, from which a threshold function is derived under the Bayesian maximum a posteriori estimation criterion, thereby removing the image noise. Simulations on different types of images show that the method not only gives good visual quality but also achieves high peak signal-to-noise ratio and mean structural similarity.
【Pages】8 (P33-40)
【Authors】石满红; 刘卫
【Affiliations】School of Information and Network Engineering, Anhui Science and Technology University, Fengyang, Anhui 233100; Hefei Institute of Intelligent Machines, Chinese Academy of Sciences, Hefei, Anhui 230031
【Language】Chinese
【CLC number】TP391
【Related literature】
1. An improved adaptive median filtering algorithm in the Shearlet transform domain [J], 柴成林
2. An adaptive image denoising algorithm in the Shearlet transform domain [J], 朱华生; 徐晨光
3. A locally optimal nonlinear blind watermark detection algorithm in the Shearlet transform domain [J], 许志良; 邓承志
4. An improved total-variation-regularized Shearlet adaptive denoising algorithm for strip-steel images [J], 韩英莉
5. An image fusion algorithm in the translation-invariant shearlet transform domain [J], 刘卫; 殷明; 栾静; 郭宇

Research on Visual Inspection Algorithms for Defects of Textured Objects (graduate thesis)

Abstract
In highly competitive, automated industrial production, machine vision plays a decisive role in product quality control, and its application to defect inspection is becoming commonplace. Compared with conventional inspection techniques, automated visual inspection systems are more economical, faster, more efficient and safer. Textured objects are ubiquitous in industrial production: substrates for semiconductor assembly and packaging, light-emitting diodes, printed circuit boards in modern electronic systems, and cloth and fabrics in the textile industry can all be regarded as objects with texture features. This thesis focuses on defect inspection techniques for textured objects, providing efficient and reliable inspection algorithms for their automated inspection. Texture is an important feature for describing image content, and texture analysis has been successfully applied to texture segmentation and classification. This work proposes a defect inspection algorithm based on texture analysis and reference comparison. The algorithm tolerates image registration errors caused by object deformation and is robust to the influence of texture. It aims to provide rich, physically meaningful descriptions of detected defect regions, such as their size, shape, brightness contrast and spatial distribution. When a reference image is available, the algorithm applies to both homogeneous and inhomogeneous textured objects, and it also performs well on non-textured objects. Throughout the inspection process we adopt steerable-pyramid texture analysis and reconstruction. Unlike traditional wavelet texture analysis, a tolerance-control algorithm for object deformation and texture influence is added in the wavelet domain, and the steerable-pyramid reconstruction ensures that the physical meaning of defect regions is recovered accurately. In experiments on a series of images of practical value, the proposed algorithm proves efficient and easy to implement.
Keywords: defect detection, texture, object distortion, steerable pyramid, reconstruction

ADAPTIVE WAVELET PACKET IMAGE CODING USING AN ESTIMATION-QUANTIZATION FRAMEWORK

Mehmet Kıvanç Mıhçak, Kannan Ramchandran and Pierre Moulin
U. of Illinois, Beckman Inst. and ECE Dept., 405 N. Mathews Ave., Urbana, IL 61801
Email: {mihcak, kannan, moulin}@

ABSTRACT
In this paper, we extend the statistical model-based Estimation-Quantization (EQ) wavelet image coding algorithm introduced in [?] to include an adaptive transform component. For this, we resort to the rich, space-frequency diverse, and easy-to-search library of transforms provided by the family of wavelet packet (WP) bases and their adaptive extensions. We use rate-distortion criteria to find the best basis jointly with the statistical model-based best adaptive quantization and entropy-coding strategy of [?], based on an efficient and fast tree-pruning algorithm. A key underlying attribute of our paradigm is that the spatially-varying Generalized Gaussian mixture model for wavelet coefficients introduced in [?] is also applicable to the more arbitrary framework of (adaptive) wavelet packet transform coefficients. Our WP-EQ framework produces excellent results on standard test images. The most attractive property of our paradigm is its "universality" and robustness: based on an overall performance criterion that considers diverse classes of input test images with varying space-frequency characteristics, it is more powerful than most existing image coding algorithms, using reasonable complexity and a generic, integrated, non-training-based framework.

1. INTRODUCTION
Wavelet-based frameworks represent the current state of the art in image compression. Several top-notch wavelet image coders have been developed in recent years [?] using a myriad of innovative ways of exploiting the space-frequency characterization of the wavelet image decomposition. At a high level, these coders owe their success primarily to their ability to augment the frequency compaction property of the wavelet decomposition with spatially adaptive quantization and/or entropy-coding methods that capture the spatial characteristics of the wavelet representation as well, based on frameworks involving spatial classification [?], estimation [?], etc.
A key observation is that these coders process coefficients of a fixed transform, namely the wavelet transform. While this fixed transform may be statistically optimal for a large class of input image processes whose time-frequency characterization is well matched to that of the wavelet transform (e.g. images like the well-known "Lena" test image), flexibility in the choice of transform may be useful when considering arbitrary image classes with unknown, space-varying, or mismatched-to-the-wavelet space-frequency characteristics (e.g. images like the "Barbara" test image). That is, while the spatial adaptation exhibited in the "post-transform" processing deployed by the current state-of-the-art wavelet coders is no doubt powerful, there may be added gain in a more encompassing framework that allows the decorrelating linear transform itself to be additionally adaptable, either globally or, more powerfully, locally to the image statistics.
In this paper, we explore the use of the rich library of transforms provided by the family of wavelet-packet expansions [?] and their spatially adaptive extensions [?] (see Fig. 1). The main attraction of this paradigm is that it provides a very large and rich library of space-frequency atoms, from which the best basis can be searched for using a fast tree-structured algorithm, and for which, in practice, the side information needed to convey the choice of the winning basis is negligible [?]. The challenge is then to find an efficient strategy to jointly optimize the choice of the transform and the post-transform processing (i.e., quantization and entropy coding). An efficient way to do this in the rate-distortion sense has been described for a wavelet packet framework in [?] and for a more general adaptive wavelet-packet framework in [?]. However, the post-transform processing considered in those works is fairly simplistic compared to the sophisticated spatially adaptive quantization and entropy-coding methods espoused in current state-of-the-art coders (e.g., the EQ wavelet coder [?]), as the actual coding performances show. The main contribution of this work is to integrate the sophisticated adaptive quantization and entropy-coding strategy of the EQ wavelet coder of [?] with the powerful adaptive wavelet packet basis framework of [?] to formulate a powerful universal framework that subsumes existing wavelet coding paradigms and can handle diverse classes of input images. The idea of combining a powerful coding scheme with an adaptive wavelet packet basis has been applied in [?]; that coder is zerotree-based and tries to capture the dependencies between different bands. The image coding algorithm presented in this paper, however, is not only faster but also produces better results, since it employs spatial dependencies within each subband. Our simulation results indicate appreciable performance gain for certain test images (e.g., "Barbara" and "Goldhill") but negligible gain for others (e.g., "Lena"). However, we emphasize that the power of our framework lies in its universality at reasonable computational cost: we can handle arbitrary image classes while strictly outperforming the corresponding (fixed) wavelet-based coding framework, at a reasonable increase in complexity, thanks to fast tree-pruning algorithms. Our test results indicate the universal promise of our framework.

Figure 1: Different tree tilings of the time-frequency plane: (a) basis generated by a wavelet tree; (b) example of a basis generated by wavelet packets; (c) example of a basis generated by a space-frequency tree.
Notice that in (b), no spatial segmentation in the basis selection is allowed.

2. THE TREE STRUCTURES
2.1. The Frequency Tree Structure: Wavelet Packets
The complete subband decomposition of a discrete signal set can be represented by "the frequency tree structure," formed by recursively filtering and downsampling both the low- and high-frequency branches of the tree up to a certain depth (Fig. 2). A specific choice of decomposition topology, corresponding to any pruned subtree of the original frequency tree, represents an orthonormal basis in which to expand the input signal. The set of all such orthonormal bases forms the "wavelet packets" [?]. In 2-D, each parent potentially has 4 children, denoted LL, LH, HL and HH.

Figure 2: Tree structures in 1-D. (a) The frequency tree structure up to depth 2; (b) the wavelet tree structure of depth 2; (c) arbitrary wavelet packet trees with a maximum depth of 2. (LF: low-frequency branch; HF: high-frequency branch.)

2.2. The Space-Frequency Tree Structure
The "space-frequency tree" provides a broader set of bases for expansion than the frequency tree by adding the option of splitting in the spatial domain, i.e. spatial adaptation, to the frequency tree structure. In 2-D, each parent of the space-frequency tree has up to 8 children: the 4 frequency children discussed in the previous subsection, and 4 space children, namely the 1st, 2nd, 3rd and 4th spatial quadrants (Fig. 3). Space-frequency bases are constructed from the space-frequency tree structure by either not splitting, splitting in the frequency domain (low-pass and high-pass filtering followed by downsampling), or splitting in the spatial domain from the middle, both vertically and horizontally.

Figure 3: Parent node, frequency children and space children at a particular node of the space-frequency tree structure.

3. IMAGE CODING ALGORITHMS
3.1. EQ Coder
In this work, the so-called Estimation-Quantization (EQ) coder, developed recently in [?], is used at each subband separately. It is based on an independent "infinite" mixture model which accurately captures the space-frequency characterization within each subband; compression efficiency is combined with speed.
The image wavelet coefficients are modeled as being drawn from an independent Generalized Gaussian distribution field, of fixed unknown shape for each subband, having zero mean and unknown, slowly spatially-varying variances. The EQ framework consists of first finding the maximum-likelihood estimate of the individual spatially-varying coefficient field variances based on causal and quantized spatially-varying contexts, followed by an off-line rate-distortion-optimized quantization/entropy-coding strategy, implemented as a fast lookup table, that is optimally matched to the variance estimates. In the estimation step, the variance estimates are obtained from a causal "quantized" spatial neighborhood set, which is simultaneously available at the decoder. This set can also include cross-scale spatial-tree coefficients to enrich the information of the domain. The quantization framework is potentially very general and can range from simple uniform scalar quantization to entropy-constrained scalar quantization, with the advantage that this can be done off-line based on a unit-variance R-D quantization table. In this work, deadzone quantizers have been used. The EQ paradigm also allows dynamic switching between forward and backward adaptation; the decision is made based on the reliability of the causal prediction contexts.
It is based on pixel-level adaptivity and hence produces a very accurate mixture field separation. The scheme elegantly adapts itself to all target bitrates, aided by the dynamic partitioning of the data into the forward and backward operation modes and the flexibility of the GGD model. Moreover, it combines theoretical elegance with a significant improvement in speed, since it allows the off-line predesign of R-D-optimized quantizer/entropy-coding tables. Unlike the coding schemes used in zerotree-based wavelet image coders, it captures the correlations within each subband and is thus inherently more suitable for an adaptive wavelet-packet-based image coding framework.

3.2. Wavelet Packet-EQ and Space Frequency-EQ Algorithms
The new WP-EQ (Wavelet Packet-EQ) and SF-EQ (Space Frequency-EQ) algorithms we propose combine the frequency and space-frequency tree-pruning algorithms [?] with the EQ technique [?]. Unlike other spatially adaptive quantization schemes [?], which capture the spatial characteristics of the transform coefficients by exploiting the correlations between different subbands, the EQ framework can naturally be integrated into the tree-pruning algorithms, since it is based on spatial adaptation within each subband [?]. In WP-EQ the frequency tree, and in SF-EQ the space-frequency tree, is grown up to a certain depth as explained in the previous section. Then, having found the rate-distortion cost at each node using the EQ coder, in a bottom-to-top fashion the cost of each parent node is compared with the sum of the costs of its frequency children for WP-EQ, and with the sum of the costs of its space children and the sum of the costs of its frequency children separately for SF-EQ. For WP-EQ the decision at each node is either to split or not to split, whereas for SF-EQ the decision is either not to split, to split in frequency, or to split in space. Having found the optimal tree structure, the transform coefficients at the leaves of the tree are encoded using the EQ coder. Pseudocode descriptions of the WP-EQ and SF-EQ algorithms are presented in Table 1. The decoder simply finds the quantized coefficients in the leaf nodes and then, in a bottom-to-top fashion, the coefficients are decoded. The cost of sending a description of the winning basis (the optimal tree structure) was shown to be negligibly small in [?] for all practical scenarios.
The post-transform processing used in this work is not only faster but also more powerful than most other tree-pruning algorithms proposed to date [?], as shown by the performance of our algorithms. Furthermore, the reasonable complexity needed to find the optimal solution could be reduced using faster, near-optimal heuristics: instead of using the actual coder to find the optimal tree structure, good estimates of the rate-distortion values at each node can be employed to prune the tree.

4. EXPERIMENTAL RESULTS
In this section we present and compare the results of the wavelet packet-EQ (WP-EQ) and space frequency-EQ (SF-EQ) algorithms with the best known results in the literature [?]. Table 2 presents PSNR results for the "Lena", "Barbara" and "Goldhill" images. We used trees of maximum depth 5 and biorthogonal 10/18 filters. The cost of sending the overhead information is included in the results presented and shown to be negligible. For arbitrary tree structures it is not clear how to employ the cross-scale dependencies within the EQ framework, so the EQ coder was used in "parent-off" mode; that is, we did not exploit the correlation between different subbands for variance estimation. We anticipate that further, yet incremental, gains might be possible when the correlation between different subbands is exploited in the EQ coder. Note that for "Lena" the optimal wavelet-packet tree structure turns out to be exactly the wavelet tree, whereas for "Barbara" and "Goldhill" the optimal wavelet-packet and space-frequency tree structures are fairly different from the wavelet tree structure. The results presented are either superior to the most powerful image coders in the literature or marginally close to the best result reported, for any standard image that has been worked on. Since these
three images are good examples of different image textures, this is a good indication that our algorithms perform extremely well for images from diverse fields.

5. CONCLUSIONS
A new image coding scheme has been presented, which extends the statistical model-based Estimation-Quantization framework to include an adaptive transform component consisting of a rich, space-frequency diverse family of bases. The proposed adaptive transform systems employ a wavelet-packet tree and a space-frequency tree, respectively. The coders are robust to different image characteristics and perform either the best or very close to the best for every type of image we have tested. The use of wavelet packets in the EQ framework clearly augments the coding performance over the wavelet tree-EQ algorithm by a substantial amount, whereas spatial segmentation (the SF-EQ algorithm) does not increase the performance over WP-EQ by a comparable amount. We believe this is because the gains obtainable by exploiting spatial correlation have mostly been captured by the EQ coder's spatially adaptive scheme. We believe that better spatial segmentation schemes within the space-frequency framework can be found and leave this issue for future research. The universality and robustness of our algorithms appear to be their most attractive feature, compared with other state-of-the-art image coding algorithms that perform very well for a certain class of images but somewhat worse for other classes.

6. REFERENCES
[1] M. Vetterli and J. Kovačević, Wavelets and Subband Coding, Prentice-Hall, Englewood Cliffs, NJ, 1995.
[2] R. Coifman, Y. Meyer, S. Quake, and V. Wickerhauser, "Signal processing with wave packets," Numerical Algorithms Research Group, Yale University, 1990.
[3] K. Ramchandran and M. Vetterli, "Best wavelet packet bases in a rate-distortion sense," IEEE Trans. Image Proc., vol. 7, no. 6, June 1998.
[4] Z. Xiong, K. Ramchandran and M. T. Orchard, "Wavelet packet image coding using space-frequency quantization," IEEE Trans. Image Proc., 2(2):160-175, April 1993.
[5] C. Herley, Z. Xiong, K. Ramchandran and M. T. Orchard, "Joint space-frequency segmentation using balanced wavelet packet trees for least-cost image representation," IEEE Trans. Image Proc., vol. 6, no. 9, pp. 1213-1230, September 1997.
[6] Z. Xiong, K. Ramchandran and M. T. Orchard, "Space-frequency quantization for wavelet image coding," IEEE Trans. Image Proc., vol. 6, no. 5, pp. 677-693, May 1997.
[7] S. LoPresto, K. Ramchandran, and M. Orchard, "Image coding based on mixture modeling of wavelet coefficients and a fast estimation-quantization framework," in Proceedings of the Data Compression Conference, Snowbird, UT, 1997.
[8] I. Daubechies, Ten Lectures on Wavelets, SIAM, Philadelphia, PA, 1992.
[9] I. Daubechies, "The wavelet transform, time-frequency localization and signal analysis," IEEE Trans. Inform. Th., vol. 36, pp. 961-1005, September 1990.
[10] R. L. Joshi, H. Jafarkhani, J. H. Kasner, T. R. Fischer, N. Farvardin, M. W. Marcellin, and R. H. Bamberger, "Comparison of different methods of classification in subband coding of images," IEEE Trans. Image Proc., vol. 6, no. 11, pp. 1473-1486, November 1997.
[11] C. Chrysafis and A. Ortega, "Efficient context-based entropy coding for lossy wavelet image compression," Data Compression Conference, Snowbird, UT, 1997.
[12] I. Balasingham, A. Fuldseth, and T. A. Ramstad, "On optimal tiling of the spectrum in subband image compression," in Proceedings of the IEEE International Conference on Image Processing, Santa Barbara, CA, 1997.
[13] A. Fuldseth, I. Balasingham and T. A. Ramstad, "Efficient coding of the classification table in low bit rate subband image coding by use of hierarchical enumeration," in Proceedings of the IEEE International Conference on Image Processing, Santa Barbara, CA, 1997.
[14] A. Said and W. A. Pearlman, "A new fast and efficient image codec based on set partitioning in hierarchical trees," IEEE Trans. on Circuits and Systems for Video Technology, vol. 6, no. 3, pp. 243-250, June 1996.
[15] UCLA Image Communications Lab: PSNR results, /~ipl/psnr results.html

1. Grow the full tree up to a certain depth and find the transform coefficients at each node of the tree.
2. Use the EQ coder [?] at each node to generate the rate and distortion values for that particular node given a quality factor, thereby computing the cost for that node. (The cost for a particular node is expressed as Rate + QualityFactor * Distortion.)
3. Initialize i <- maximum depth. At depth i, assign as each node's minimum cost its original cost found by the EQ coder.
4. i <- i - 1; if i < 0, go to step 7.
5. For all the nodes at depth i, make the SPLIT decision:
   if (original cost of the parent node < sum of the minimum costs of its children) then
     DO NOT SPLIT the parent node; assign as the minimum cost of the parent its original cost found by the EQ coder, and store the split decision.
   else
     SPLIT the parent node. Assign as the minimum cost of the parent the sum of the minimum costs of its children in the case of WP-EQ; in the case of SF-EQ, the minimum cost of the parent is the smaller of "the sum of the minimum costs of its frequency children" and "the sum of the minimum costs of its space children." Store the split decision accordingly.
6. Go to step 4.
7. The split decisions at each node of the tree are now found for a certain quality factor in a rate-distortion sense. Using these split decisions, fix
the optimal tree structure and quantize the transform coefficients at the nodes where there should be no split. Produce the bits from the quantized data. (Quantization and production of the bits are again done by the EQ coder [?].)

Table 1: Pseudocode description of the encoder part of the Wavelet Packet-EQ algorithm.

Table 2: Performances of the best wavelet and subband coding based image coders compared with our image coding algorithms. The PSNR value is computed from the distortion between the original image and the decoded image at different bit rates.

Rate (bpp)                                 1.00       0.50       0.25
LENA
  Subband-based classif. using VQ [?]      41.14 dB   37.69 dB   34.31 dB
  Context-based [?]                        40.97 dB   37.52 dB   34.57 dB
  Subband-based classif. with PVQ [?]      40.26 dB   37.18 dB   34.08 dB
  Wavelet tree-EQ (parent on) [?]          40.88 dB   37.69 dB   34.57 dB
  Said-Pearlman [?]                        40.46 dB   37.21 dB   34.11 dB
  WP-EQ (parent off)                       40.80 dB   37.66 dB   34.49 dB
  SF-EQ (parent off)                       40.87 dB   37.70 dB   34.55 dB
BARBARA
  Subband-based classif. using VQ [?]      --         --         --
  Context-based [?]                        37.61 dB   32.64 dB   28.75 dB
  Subband-based classif. with PVQ [?]      38.35 dB   33.89 dB   30.12 dB
  Wavelet tree-EQ (parent on) [?]          37.65 dB   32.87 dB   28.48 dB
  Said-Pearlman [?]                        36.47 dB   33.07 dB   29.36 dB
  WP-EQ (parent off)                       38.51 dB   33.87 dB   30.00 dB
  SF-EQ (parent off)                       38.57 dB   33.89 dB   30.07 dB
GOLDHILL
  Subband-based classif. using VQ [?]      --         --         --
  Context-based [?]                        36.90 dB   33.53 dB   30.80 dB
  Subband-based classif. with PVQ [?]      --         33.46 dB   30.86 dB
  Wavelet tree-EQ (parent on) [?]          36.96 dB   33.44 dB   30.76 dB
  Said-Pearlman [?]                        36.55 dB   33.19 dB   30.70 dB
  WP-EQ (parent off)                       37.11 dB   33.61 dB   30.95 dB
  SF-EQ (parent off)                       37.22 dB   33.70 dB   31.05 dB
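The bottom-up split decision of Table 1 is a standard Lagrangian tree prune. The sketch below illustrates it on a generic binary tree with made-up leaf-coding costs; in the real coder each node's cost is Rate + QualityFactor × Distortion as reported by the EQ coder, and the frequency tree is 4-ary.

```python
class Node:
    def __init__(self, cost, children=()):
        self.cost = cost          # cost of coding this node as a leaf
        self.children = list(children)
        self.split = False        # filled in by prune()
        self.min_cost = None

def prune(node):
    # Bottom-up rate-distortion pruning (the WP-EQ split decision):
    # keep the parent as a leaf unless its children are jointly cheaper.
    if not node.children:
        node.min_cost = node.cost
        return node.min_cost
    child_sum = sum(prune(c) for c in node.children)
    if node.cost < child_sum:
        node.split = False
        node.min_cost = node.cost
    else:
        node.split = True
        node.min_cost = child_sum
    return node.min_cost
```

For SF-EQ the same recursion would compare the parent cost against both the frequency-children sum and the space-children sum and take the smallest of the three.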

Strip-steel Surface Defect Detection Based on Wavelet Denoising and an Improved Canny Algorithm

现代电子技术 (Modern Electronics Technique), Feb. 2024, Vol. 47, No. 4. DOI: 10.16652/j.issn.1004-373x.2024.04.027
Citation: 崔莹, 赵磊, 李恒, 等. 基于小波去噪与改进Canny算法的带钢表面缺陷检测[J]. 现代电子技术, 2024, 47(4): 148-152.
崔莹, 赵磊, 李恒, 刘辉
(Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, Yunnan, China)

Abstract: To deal with the uneven brightness and low contrast of strip-steel surface images and the many, complex forms of their defects, a strip-steel surface defect detection algorithm based on wavelet denoising and an improved Canny algorithm is proposed. The original image is first decomposed by the wavelet transform; the low-frequency component is processed with improved homomorphic filtering to raise brightness and contrast, the high-frequency components are denoised with an improved threshold function, and the enhanced image is obtained by wavelet reconstruction. The traditional Canny algorithm is then improved: smoothing is performed with an improved adaptive weighted median filter, gradient-direction templates are added, and the high and low thresholds are obtained by combining iterative optimal threshold selection with the maximum between-class variance method, improving the algorithm's adaptivity.

0 Introduction
Strip steel is one of the main products of the steel industry and is widely used in machinery manufacturing, aerospace, the military industry, shipbuilding and other sectors. During its production, however, factors such as raw materials, production equipment and process flow inevitably cause surface defects, for example oxidation, patches, cracks, pitting, inclusions and scratches. Surface defects not only affect the appearance of the strip but also impair properties such as wear resistance, corrosion resistance and fatigue strength, so quality inspection must be strengthened and defective strip detected and screened out. Traditional manual inspection relies on human judgment and suffers from high randomness, low detection confidence and poor real-time performance [1]. 卞桂平 et al. proposed an image edge detection method based on an improved Canny algorithm, replacing Gaussian filtering with compound morphological filtering, selecting the high and low thresholds by the maximum between-class variance method, and finally thinning the edges with mathematical morphology, which improved noise immunity [2]. 刘源 et al. proposed a
Methods for Two-dimensional Sampling and Reconstruction

Two-dimensional sampling and reconstruction mainly proceeds by one of two methods:

1. Measure projection data of the object, then use these data to reconstruct its actual internal structure. This method follows the general idea of image reconstruction: find the elements of a matrix by solving a system of equations. For example, the matrix elements can be taken as unknowns, a system of linear equations written down, and the system solved to obtain the answer.
2. Use slices of the two-dimensional Fourier transform F(ωx, ωy) of the image function f(x, y), taken through the origin along directions parallel to the detector, to reconstruct the object's actual internal structure. This approach is usually implemented as the FBP (Filtered Backprojection, i.e. filter first, then backproject) algorithm.

Coded Aperture Imaging Algorithms

Coded aperture imaging is an imaging technique for reconstructing high-resolution images. By encoding the incoming light with a special coding pattern, it enables reconstruction from low-resolution measurements. The basic principle is to place a special coded mask at the input plane of the optical system, where it interacts with the light from the scene. The mask is usually a high-contrast binary pattern, such as a pseudo-random or Hadamard pattern. As light passes through the mask, it is modulated by the pattern's binary information. At the output plane of the optical system, a film or sensor records the encoded light; the recorded data are then processed with inverse-problem methods to reconstruct a high-resolution image of the original scene.
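A minimal 1-D sketch of the encode/decode cycle, using a length-7 m-sequence as the pseudo-random mask (an illustrative choice; real systems typically use 2-D URA/MURA patterns): because the mask's circular cross-correlation with the balanced ±1 decoding array is a delta function, correlation decoding recovers the scene exactly.

```python
N = 7
mask = [1, 1, 1, 0, 1, 0, 0]           # length-7 m-sequence (pseudo-random mask)
decoder = [2 * m - 1 for m in mask]    # balanced +1/-1 decoding array

def encode(scene):
    # Sensor measurement: circular convolution of the scene with the mask
    return [sum(scene[k] * mask[(n - k) % N] for k in range(N)) for n in range(N)]

def decode(meas):
    # Cross-correlate with the decoding array; for an m-sequence mask the
    # mask/decoder cross-correlation is a delta, so the scene is recovered
    # exactly up to the scale w = number of open mask elements.
    w = sum(mask)
    return [sum(meas[(n + j) % N] * decoder[j] for j in range(N)) / w
            for n in range(N)]
```

The same delta-correlation property is what lets URA masks combine a large open fraction (good light throughput) with artifact-free linear decoding.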

The advantage of coded aperture imaging is that high-resolution images can be obtained with a simple optical system and a low-resolution sensor. Compared with traditional high-resolution imaging techniques, it is low-cost, simple and easy to implement, so it has broad application prospects in fields such as space remote sensing and medical imaging. The technique also has challenges and limitations. First, the coded mask introduces artifacts and noise into the reconstructed image. Second, the design of the mask strongly affects reconstruction quality and must be optimized for the specific application. Finally, coded aperture imaging can cause distortion or information loss in fast-moving scenes, making it unsuitable for applications that require high-speed imaging. Overall, coded aperture imaging is a promising high-resolution imaging technique, but some challenges must be overcome in practice. Future research could optimize mask design and improve the quality and stability of image reconstruction, so as to extend the range of applications of the algorithm.

A Fruit-image Preprocessing Method in the Nonsubsampled Contourlet Transform Domain

许健才
【Journal】江苏农业科学 (Jiangsu Agricultural Sciences)
【Year (Volume), Issue】2015(000)011
【Abstract】During acquisition, fruit images are constrained by defects of the capture system and by a complex, changing imaging environment, so noise is mixed into the image and its clarity is reduced. An effective preprocessing method for such images is proposed based on the nonsubsampled contourlet transform (NSCT). The method first performs a multi-scale NSCT decomposition of the image to obtain low-frequency and high-frequency coefficients; the low-frequency coefficients are then processed with fuzzy enhancement, and the high-frequency coefficients with a two-dimensional multi-stage median filtering algorithm; finally the coefficients are reconstructed to obtain a clearer fruit image. The algorithm and several existing algorithms of the same type were used to denoise fruit images, with peak signal-to-noise ratio (PSNR) as the evaluation index; the results show that the algorithm clearly outperforms the existing algorithms of the same type.
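One common variant of the two-dimensional multi-stage median filter mentioned above takes medians along four directional windows through each pixel and combines them with the center pixel by further medians. This sketch is a generic textbook form, not necessarily the exact filter of the paper.

```python
def median(vals):
    s = sorted(vals)
    return s[len(s) // 2]

def multistage_median(img, r=1):
    # Medians along the horizontal, vertical and two diagonal windows of
    # half-width r, combined with the center pixel; border pixels are copied.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(r, h - r):
        for x in range(r, w - r):
            c = img[y][x]
            hor = [img[y][x + k] for k in range(-r, r + 1)]
            ver = [img[y + k][x] for k in range(-r, r + 1)]
            d1 = [img[y + k][x + k] for k in range(-r, r + 1)]
            d2 = [img[y + k][x - k] for k in range(-r, r + 1)]
            z1, z2 = median(hor), median(ver)
            z3, z4 = median(d1), median(d2)
            out[y][x] = median([median([z1, z2, c]), median([z3, z4, c]), c])
    return out
```

Compared with a plain square-window median, the directional windows preserve thin lines and edges while still suppressing impulse noise.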

【Pages】3 (P499-501)
【Author】许健才
【Affiliation】Department of Information Technology, Guangzhou City Polytechnic, Guangzhou 510405, Guangdong
【Language】Chinese
【CLC number】S126; TP391
【Related literature】
1. NSCT-domain denoising of 4f-system images using image-texture continuity [J], 徐鑫; 田逢春; 陈建军; 姬艳丽
2. An NSCT-domain SAR image denoising method based on maximum a posteriori estimation and non-local constraints [J], 岳春宇; 江万寿
3. A fruit-image denoising algorithm using improved bilateral filtering in the wavelet domain [J], 刘炳良
4. Research on fusion of panchromatic and multispectral images based on the nonsubsampled contourlet transform [J], 傅瑶; 孙雪晨; 薛旭成; 韩诚山; 赵运隆; 曲利新
5. A partition enhancement method for side-scan sonar images in the NSCT domain [J], 武鹤龙; 邱政; 张维全

An Efficient Quadtree Fractal Image Coding Scheme in the Wavelet Domain

高西奇; 洪波; 张辉; 何振亚
【Journal】东南大学学报 (Journal of Southeast University, English Edition)
【Year (Volume), Issue】1998(000)001
【Abstract】Building on Davis's wavelet theory of fractal image coding, an efficient quadtree fractal image coding scheme in the wavelet domain is proposed. In this scheme, zerotrees of wavelet coefficients are used to reduce the number of domain blocks, lowering the bit cost of representing the local information of the fractal code; the set of scalar quantizers, the wavelet-subtree self-quantizer and the decision tree used in the scheme are jointly optimized under an entropy constraint. Experimental results show that at low bit rates the peak signal-to-noise ratio of the scheme improves on previously reported results by about 1 dB.
【Pages】6 (P35-40)
【Keywords】fractal image coding; wavelet transform; quadtree
【Authors】高西奇; 洪波; 张辉; 何振亚
【Affiliation】Department of Radio Engineering, Southeast University
【Language】Chinese
【CLC number】TN919.8
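Quadtree partitioning of the kind used by such coders can be sketched generically: split a square block into four quadrants whenever it is insufficiently homogeneous. The variance test below is an illustrative criterion; the scheme summarized above prunes domain blocks using wavelet-coefficient zerotrees instead.

```python
def variance(block):
    # Population variance of a rectangular block of pixel values
    n = len(block) * len(block[0])
    mean = sum(map(sum, block)) / n
    return sum((v - mean) ** 2 for row in block for v in row) / n

def quadtree(img, x, y, size, thresh, min_size, leaves):
    # Recursively split the square at (x, y) of side `size` into quadrants
    # until it is homogeneous enough or reaches the minimum block size.
    block = [row[x:x + size] for row in img[y:y + size]]
    if size <= min_size or variance(block) <= thresh:
        leaves.append((x, y, size))
        return leaves
    h = size // 2
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        quadtree(img, x + dx, y + dy, h, thresh, min_size, leaves)
    return leaves
```

The resulting leaf list assigns large blocks to smooth regions and small blocks to detailed ones, which is exactly the bit-allocation behavior quadtree coders exploit.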

Two-dimensional Trellis-coded Quantization of Wavelet Images Based on Quadtree Classification

纪中伟; 郑勇; 朱维乐
【Journal】信号处理 (Signal Processing)
【Year (Volume), Issue】2002(018)005
【Abstract】A new method is proposed that applies two-dimensional trellis-coded quantization (2D-TCQ) to wavelet images after quadtree classification of the wavelet coefficients. The subband coefficients of the wavelet image are first classified by a quadtree according to their inter-band correlation; the significant-class coefficients are then trellis-coded-quantized in an extended two-dimensional codebook space, with the Viterbi algorithm used to find the optimal quantization sequence, finally forming an ordered embedded bit stream. Simulations show that at the same coding rate the method improves PSNR by about 0.4 dB over the SPIHT algorithm, and, with a codebook half the size, by about 0.1 dB over one-dimensional TCQ after quadtree classification. Because the method can use a small codebook and has low computational cost, it suits low-memory, low-power encoding and decoding environments.
【Pages】5 (P394-398)
【Authors】纪中伟; 郑勇; 朱维乐
【Affiliation】Research Office 1603, School of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu 610054
【Language】Chinese
【CLC number】TN911
【Related literature】
1. A classified-quantization image coding algorithm based on the wavelet transform [J], 王向阳; 杨红颖
2. Trellis-coded vector quantization of wavelet images based on tree-structured vector classification [J], 郑勇; 周正华; 朱维乐
3. Trellis-coded vector quantization of wavelet images based on direction-tree-structured vector classification [J], 郑勇; 蒋文军; 杨文考; 朱维乐
4. Application of quadtree-classified trellis-coded quantization to SAR images [J], 谢海慧; 纪中伟; 黄顺吉
5. Two-dimensional trellis-coded quantization of wavelet images based on zerotree classification [J], 纪中伟; 郑勇; 朱维乐

An Image Watermarking Algorithm Based on the Direction-tree Structure in the Wavelet Domain

李春花; 卢正鼎
【Journal】计算机工程 (Computer Engineering)
【Year (Volume), Issue】2007(033)019
【Abstract】In the wavelet transform domain, coefficients at the same spatial location, in subbands of the same orientation at different resolutions, are strongly similar. An image watermarking algorithm based on this direction-tree structure in the wavelet domain is proposed. The algorithm uses a support vector machine to build a model of the direction-tree structure of the wavelet coefficients and hides the watermark by adjusting the relationship between the model's output value and a target value. Watermark extraction does not require the original cover image. Experiments show that the algorithm is highly robust to common image attacks, with large embedding capacity, good imperceptibility, strong security and superior performance.
【Pages】3 (P132-133, 149)
【Authors】李春花; 卢正鼎
【Affiliation】School of Computer Science, Huazhong University of Science and Technology, Wuhan 430074
【Language】Chinese
【CLC number】TP309
【Related literature】
1. An image watermarking algorithm based on the wavelet zerotree structure [J], 饶智坚; 常建平
2. Medical image segmentation based on a tree-structured MRF in the wavelet domain [J], 施宇; 夏平; 雷帮军; 师冬霞
3. Sonar image segmentation based on a tree-structured MRF model in the complex wavelet domain [J], 夏平; 刘小妹; 雷帮军; 吴涛
4. A digital watermarking algorithm based on the wavelet-tree structure of images [J], 吴芳; 芮国胜
5. Synthetic aperture radar image compression based on the spatial tree structure in the wavelet domain [J], 赵跃东; 杨汝良

Region-of-interest Coding Based on Quantization of Image Features

李景超; 罗建书
【Journal】计算机工程与应用 (Computer Engineering and Applications)
【Year (Volume), Issue】2007(043)011
【Abstract】Exploiting the match between the multiresolution analysis of the wavelet transform and human vision, and considering the characteristics and importance of the image in the different wavelet-domain subbands, a hybrid quantization-coding method based on subband features is proposed that combines inter-band and intra-band correlation. It supports progressive transmission and yields visually satisfying, high-quality reconstructed images at high compression ratios.
【Pages】3 (P45-47)
【Authors】李景超; 罗建书
【Affiliation】Department of Mathematics and Systems Science, College of Science, National University of Defense Technology, Changsha 410073
【Language】Chinese
【CLC number】TN919.81
【Related literature】
1. Interferometric multispectral image compression based on distributed source coding and region-of-interest coding [J], 孔繁锵; 吴宪云
2. Arbitrary-shape region-of-interest image coding based on run-length and extended exponential-Golomb codes [J], 徐勇; 徐智勇; 张启衡
3. Three-dimensional multiple-description quantization coding based on regions of interest [J], 吴睿; 徐向民; 杨燕贤
4. Research and design of an FPGA-based HEVC region-of-interest coding algorithm [J], 李申; 严伟; 夏珺; 崔正东; 柴志雷
5. An optimized intra-prediction algorithm for high-performance video coding based on regions of interest [J], 宋人杰; 张元东

An Information Hiding Method in the Second-Generation Curvelet Transform Domain

Wang Chenyi; Wang Jianjun
【Journal】Journal of Terahertz Science and Electronic Information Technology
【Year (volume), issue】2008, 6(2)
【Abstract】An information hiding method based on the second-generation Curvelet transform is proposed. With a digital image as the cover medium, the method selects a subset of coefficients according to the anisotropic scaling relation within the Curvelet subbands and embeds data by quantizing their real and imaginary parts separately. Error-control coding is introduced to reduce bit errors, and the secret information is extracted without reference to the original cover medium, achieving blind extraction. Compared with the DCT-domain Jsteg algorithm and wavelet-domain quantization index modulation, the proposed algorithm achieves higher image quality and a larger embedding capacity.
【Pages】6 pages (105-110)
【Authors】Wang Chenyi; Wang Jianjun
【Affiliation】Department of Electronic Engineering, Fudan University, Shanghai 200433 (both authors)
【Language】Chinese
【CLC number】TN911.73


QUADTREE-GUIDED WAVELET IMAGE CODING1
Chia-Yuan Teng and David L. Neuhoff
Department of Electrical Engineering and Computer Science, The University of Michigan, Ann Arbor, MI 48109
1Supported in part by NSF Grant NCR 9415754

Abstract
A hybrid of wavelet and quadtree image coding is introduced. It uses a recently developed quadtree predictive coding technique, which has good rate/fidelity performance and very low complexity, to encode the baseband wavelet coefficients. The resulting quadtree decomposition of the baseband is then used to guide the encoding of the higher frequency subbands, by indicating which of their coefficients require encoding and which do not. The result is a coding method with performance comparable to that of the best wavelet coders, but with less complexity.

I. Introduction
Subband image coders based on discrete wavelet transforms have produced some of the best, if not the best, image coding results (cf. [1-7]), in the sense of good rate vs. fidelity characteristics (both PSNR and perceptual) with reasonable complexity. Aside from producing good subband decompositions, recent wavelet coders have also benefited substantially from clever ways of exploiting the similarity of the subband images at one scale with those at another [2,4,5,7,8]. Though quadtree image coding has a longer history (cf. [9-14]), generally speaking it has not produced coders as effective as the best wavelet coders. In recent work [14], the present authors demonstrated a predictive version of quadtree coding with significantly less complexity and better rate/fidelity performance than previous quadtree techniques or JPEG. Due to its predictive nature, it avoids the kind of blocking artifacts that have prevented other quadtree methods (and block-based methods in general) from achieving good fidelity at low rates. However, at very low rates the predictive nature of the method tends to cause some blurring. So while the new quadtree method is less complex than wavelet coding, its perceptual quality is not as good at low rates, for example below 0.25 bpp.

In this paper, a new image coding technique that marries wavelet and quadtree predictive coding is proposed. From the point of view of wavelet coding, the quadtree predictive approach is an excellent way to encode the baseband. Moreover, the quadtree decomposition of the baseband provides a measure of local activity that can be used to guide the coding of the higher subbands, thereby exploiting the similarity between subbands at different scales in a simple and effective way. In fact, the new technique works well with only a one- or two-scale wavelet decomposition. Thus, it is simpler than other wavelet coding approaches. From the point of view of quadtree coding, the blurring caused by low-rate coding is avoided because the quadtree is applied only to the baseband; the details are coded in a different fashion. Though a wavelet decomposition must be performed, the method is not much more complex than the original quadtree method, because only a one- or two-scale decomposition is needed, the quadtree method is applied only to the baseband, and simple methods are used to encode the remaining subbands.

The paper is organized as follows. A few facts about discrete wavelet transforms are given in Section II. Quadtree coding and the quadtree predictive method are briefly summarized in Section III. Section IV describes the new hybrid of wavelet and quadtree methods. Experimental results, including comparisons with wavelet coding, quadtree predictive coding and JPEG, are given in Section V. Concluding remarks are contained in Section VI.

II. Discrete Wavelet Transform Coding
A discrete wavelet transform (DWT) (cf. Mallat [15]) decomposes an image into a collection of 3N+1 subband images, where N is the number of scales. Specifically, a one-dimensional low-pass filter and a one-dimensional high-pass filter, each based on a mother wavelet, are applied horizontally and vertically to the original image, and the outputs are subsampled by a factor of 2 to create four subband images, designated Low-Low, Low-High, High-Low and High-High. The three subband images with a High component are considered to be at scale 1. The filtering and subsampling are repeated on the Low-Low subband, creating three subband images at scale 2 and a new Low-Low subband image. Such filtering and subsampling of Low-Low subband images continues until 3N+1 subband images have been created: three at each scale and one, the baseband image, which is a low-pass filtered version of the original image. For more detailed discussions of wavelet decompositions, we refer the reader to [15-18].

If the original image is M×M, then the subband images at scale i are M2^-i × M2^-i, and the baseband image is M2^-N × M2^-N. For future reference, we mention that each pixel of the baseband is considered to have 2^(2(N-i)) descendants in each of the outer subbands at scale i. These pixels correspond to the same locations in the original image.

A wavelet image coder is created by quantizing and encoding the pixels (often called coefficients) in the various subband images. A desirable feature of wavelet transforms for image coding is their tendency to create large coefficients in the baseband and small coefficients in the other (we will say outer) bands. Though important details such as edges are represented by outer band coefficients, relatively few of them need to be accurately represented. A variety of methods have been proposed for quantizing and encoding the wavelet coefficients.
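To make the decomposition described in Section II concrete, here is a minimal one-scale 2D decomposition in Python. The paper does not commit to a particular filter pair; the two-tap Haar averaging/differencing filters (unnormalized) are used here purely for brevity, and the function name is illustrative.

```python
import numpy as np

def haar_dwt2(img):
    """One-scale 2D Haar DWT: filter and subsample horizontally, then
    vertically, yielding the Low-Low, Low-High, High-Low and High-High
    subbands, each half the size of the input in each dimension."""
    a = np.asarray(img, dtype=float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0    # horizontal low-pass + subsample
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0    # horizontal high-pass + subsample
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0  # baseband: 2x2 block averages
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

# An N-scale decomposition simply recurses on ll: an 8x8 input gives 4x4
# subbands at scale 1, and after N = 2 scales a 2x2 baseband (3N+1 = 7
# subbands in total), matching the M2^-i sizes quoted in Section II.
ll, lh, hl, hh = haar_dwt2(np.arange(64.0).reshape(8, 8))
print([s.shape for s in (ll, lh, hl, hh)])
```

Repeating haar_dwt2 on ll produces the scale-2 subbands; production coders would of course use longer biorthogonal filters with proper boundary handling.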
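The guidance idea sketched in the introduction, a baseband activity map indicating which outer-subband coefficients need encoding, can be illustrated as follows. The paper derives this map from the quadtree decomposition of the baseband; here a plain boolean array stands in for it, and `guided_mask` is a hypothetical name, not the paper's notation.

```python
import numpy as np

def guided_mask(active, N, i):
    """Expand a boolean baseband activity map to a scale-i subband: each
    baseband pixel has 2**(2*(N-i)) descendants there, i.e. a square block
    of side 2**(N-i) at the corresponding spatial location."""
    f = 2 ** (N - i)
    return np.kron(active.astype(int), np.ones((f, f), dtype=int)).astype(bool)

# Hypothetical 4x4 baseband from a 2-scale decomposition of a 16x16 image:
active = np.zeros((4, 4), dtype=bool)
active[1, 2] = True                      # one "busy" baseband pixel
mask = guided_mask(active, N=2, i=1)     # scale-1 subbands are 8x8
print(mask.shape, int(mask.sum()))       # only 4 of 64 coefficients flagged
```

Coefficients where the mask is False are simply skipped, which is how the quadtree structure of the baseband exploits cross-scale similarity without any explicit zerotree machinery.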