Fourier Analysis, Wavelet Analysis, and Compressed Sensing: An Introduction and Applications (English Version)


1) Fourier Analysis
Fourier analysis, also known as harmonic analysis, is a branch of mathematics. Fourier analysis shows how a function or a signal can be expressed as a superposition of basic waveforms.
Fourier analysis is a mathematical method used to analyze non-sinusoidal periodic signals.
In the field of signal analysis, we can use the Fourier series representation of a signal to analyze the characteristics of periodic signals in the frequency domain and to establish the signal's spectrum.
If the periodic signal $f_1(t)$ satisfies the Dirichlet conditions, it can be expanded into the trigonometric form of the Fourier series.
The Dirichlet conditions are:
f(x) must have a finite number of extrema in any given interval;
f(x) must have a finite number of discontinuities in any given interval;
f(x) must be absolutely integrable over a period;
f(x) must be bounded.
The trigonometric form of the Fourier series is

$$f_1(t) = a_0 + \sum_{n=1}^{\infty}\left(a_n\cos n\omega_1 t + b_n\sin n\omega_1 t\right)$$

where

$$a_0 = \frac{1}{T_1}\int_{t_0}^{t_0+T_1} f_1(t)\,dt$$  (direct current component)

$$\omega_1 = \frac{2\pi}{T_1}$$  (fundamental frequency)

Equivalently, in amplitude-phase form,

$$f_1(t) = c_0 + \sum_{n=1}^{\infty} c_n\cos(n\omega_1 t + \varphi_n)$$

where

$$c_0 = a_0$$  (direct current amplitude)

$$c_n = \sqrt{a_n^2 + b_n^2}$$  (harmonic amplitude)

$$\varphi_n = -\arctan\frac{b_n}{a_n}$$  (harmonic phase)
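To make these formulas concrete, here is a minimal MATLAB sketch (added for illustration; it is not part of the original material, and the period, test signal, and number of harmonics are arbitrary choices) that numerically approximates $a_0$, $a_n$ and $b_n$ for a square wave and sums a truncated series.

% Hypothetical sketch: numerically approximate the trigonometric Fourier
% coefficients of a square wave over one period and rebuild a truncated series.
T1 = 2*pi;                          % assumed period
w1 = 2*pi/T1;                       % fundamental frequency
t  = linspace(0, T1, 4096);         % one period, fine grid
f  = sign(sin(w1*t));               % test signal: square wave
N  = 10;                            % number of harmonics kept
a0 = trapz(t, f)/T1;                % direct current component
fr = a0*ones(size(t));              % truncated reconstruction
for n = 1:N
    an = 2/T1*trapz(t, f.*cos(n*w1*t));   % coefficient a_n
    bn = 2/T1*trapz(t, f.*sin(n*w1*t));   % coefficient b_n
    fr = fr + an*cos(n*w1*t) + bn*sin(n*w1*t);
end
plot(t, f, t, fr);                  % compare the signal and the partial sum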
In the frequency-domain analysis of a system, we can obtain the following equation:

$$H(\omega) = \frac{Y(\omega)}{X(\omega)}$$

where $X(\omega)$ is the spectrum of the stimulus (input), $Y(\omega)$ is the spectrum of the response signal, and $H(\omega)$ is the system function.
For instance, for the system described by

$$\frac{d^2 q(t)}{dt^2} + 3\frac{dq(t)}{dt} + 2q(t) = x(t)$$

taking the Fourier transform of both sides, we obtain $H(\omega)$:

$$H(\omega) = \frac{1}{(j\omega)^2 + 3(j\omega) + 2}$$
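As a quick check of this result, the sketch below (not part of the original text; plain MATLAB, with an arbitrarily chosen frequency range) evaluates this $H(\omega)$ on a frequency grid and plots its magnitude.

% Hypothetical sketch: evaluate the system function of the example above,
% H(w) = 1/((jw)^2 + 3jw + 2), on a frequency grid and plot its magnitude.
w = linspace(0, 10, 500);               % frequency axis in rad/s (arbitrary range)
H = 1 ./ ((1j*w).^2 + 3*(1j*w) + 2);    % direct evaluation of H(w)
plot(w, abs(H));                        % magnitude response |H(w)|
xlabel('\omega (rad/s)'); ylabel('|H(\omega)|');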
But Fourier analysis has some limitations:
The Fourier coefficients are constants and do not change with time t, so Fourier analysis can only handle stationary signals whose spectral components are the same at all times; when dealing with non-stationary signals it produces large errors and can differ greatly from the actual situation. Examples include undamped and damped free vibration of a single degree of freedom, a swinging pendulum, a clock, a chorus, etc.
In real signals, the high-frequency and low-frequency contents may behave very differently: over the same time interval, the high-frequency components decay while the low-frequency components do not, so the spectral components of the signal are different at different times. Insisting on describing all times with a single Fourier transform, and deliberately changing how the amplitude varies with frequency to compensate, produces errors not only in the high-frequency Fourier coefficients but also significant errors in the low-frequency coefficients, and of course errors in the frequencies obtained.
The Fourier coefficients are weighted averages over the whole time domain. This means that local, abrupt information is averaged out, and local transient features are difficult to capture (like everyone eating from the same big pot, egalitarianism). Very different signals, such as a square wave, a triangle wave and a sine wave, can yield the same frequencies, so this kind of processing has poor sensitivity for capturing transient signals such as fault signals. To capture transient signals, a transform that reflects local information should be used instead.
2) Wavelet Analysis
Wavelet transforms are broadly divided into three classes: continuous, discrete and multiresolution-based.
Continuous wavelet transforms (continuous shift and scale parameters):
The subspace of scale a (or of the corresponding frequency band) is generated by the functions (sometimes called child wavelets):

$$\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\,\psi\!\left(\frac{t-b}{a}\right)$$

where a is positive and defines the scale and b is any real number and defines the shift.
The pair (a, b) defines a point in the right half-plane $\mathbb{R}_{+}\times\mathbb{R}$.
The projection of a function x onto the subspace of scale a then has the form

$$x_a(t) = \int_{\mathbb{R}} WT_{\psi}\{x\}(a,b)\,\psi_{a,b}(t)\,db$$

with wavelet coefficients

$$WT_{\psi}\{x\}(a,b) = \langle x,\psi_{a,b}\rangle = \int_{\mathbb{R}} x(t)\,\overline{\psi_{a,b}(t)}\,dt .$$
For the analysis of the signal x, one can assemble the wavelet coefficients into a scalogram of the signal.
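For illustration, here is a short MATLAB sketch (not from the original; it assumes the Wavelet Toolbox and its legacy cwt(signal, scales, wavelet) interface, and the test signal and scale grid are arbitrary) that computes wavelet coefficients of a test signal and displays them as a scalogram.

% Hypothetical sketch: continuous wavelet coefficients of a chirp-like test
% signal on a grid of scales, assembled into a scalogram.
t = linspace(0, 1, 1024);
x = sin(2*pi*50*t.^2);                 % test signal whose frequency drifts in time
scales = 1:64;                         % grid of scale parameters a
coefs = cwt(x, scales, 'morl');        % wavelet coefficients W(a,b) (legacy syntax)
imagesc(t, scales, abs(coefs));        % scalogram: |W(a,b)| over shift b and scale a
xlabel('shift b'); ylabel('scale a');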
Discrete wavelet transforms (discrete shift and scale parameters):
$$f(t) = \sum_{j=-\infty}^{\infty}\sum_{k=-\infty}^{\infty}\langle f,\tilde{\psi}_{j,k}\rangle\,\psi_{j,k}(t) = \sum_{j=-\infty}^{\infty}\sum_{k=-\infty}^{\infty} d_{j,k}\,\psi_{j,k}(t)$$

$$\psi_{j,k}(t) = \psi(2^{j}t - k)$$

Among them, $\psi(t)$ is the wavelet function, $d_{j,k}$ are the wavelet coefficients, and $d_{j,k} = \langle f,\tilde{\psi}_{j,k}\rangle$.
The wavelet series is a double sum: the wavelet coefficients are indexed not only by frequency (scale) but also by time. In other words, unlike the Fourier coefficients, which vary only with frequency, wavelet coefficients with the same frequency index differ at different times.
Because the wavelet function has compact support, that is, it is zero outside a certain interval, computing the wavelet coefficient for a given frequency at a given time uses only local information of the signal near that time.
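A small MATLAB sketch (added here for illustration; it assumes the Wavelet Toolbox function dwt, and the signal and spike location are arbitrary) shows this locality: a single-level discrete wavelet transform of a signal with one sharp spike produces large detail coefficients only near the spike.

% Hypothetical sketch: single-level discrete wavelet transform of a signal
% containing one local "mutation" (spike); only the detail coefficients near
% the spike become large, because the wavelet has compact support.
n = 0:1023;
x = sin(2*pi*n/128);                   % smooth test signal
x(512) = x(512) + 5;                   % add a spike at one sample
[cA, cD] = dwt(x, 'db4');              % approximation and detail coefficients
stem(abs(cD));                         % large |cD| values cluster near the spike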
An important application of wavelet analysis is two-dimensional wavelet analysis for image compression.
It is characterized by a high compression ratio and high compression speed; it can keep the essential characteristics of the image after compression, and it can resist interference during transmission.
Below, a two-dimensional image signal is compressed using two-dimensional wavelet analysis. After the image is decomposed by wavelet analysis, a series of sub-images with different resolutions is obtained; sub-images with different resolutions correspond to different frequency bands.
%input the image; the image file is wbarb.mat
load wbarb;
%show the figure
subplot(221);image(X);colormap(map)
title('Original image');
axis square
disp('The size of the image before compression X:');
whos('X')
%use the wavelet bior3.7 to perform a two-level wavelet decomposition
[c,s]=wavedec2(X,2,'bior3.7');
%extract the first-level low-frequency and high-frequency coefficients
%from the wavelet decomposition structure.
cal=appcoef2(c,s,'bior3.7',1);
ch1=detcoef2('h',c,s,1);
cv1=detcoef2('v',c,s,1);
cd1=detcoef2('d',c,s,1);
%reconstruction of the various frequencies
a1=wrcoef2('a',c,s,'bior3.7',1);
h1=wrcoef2('h',c,s,'bior3.7',1);
v1=wrcoef2('v',c,s,'bior3.7',1);
d1=wrcoef2('d',c,s,'bior3.7',1);
c1=[a1,h1;v1,d1];
%show the various frequencies after the decomposition.
subplot(222);image(c1);
axis square
title('Decomposition of low and high frequency information');
%do the compression of the image.
%keep the first layer of low-frequency information of wavelet decomposition
%and do the compression of the figure.
%the first layer of low-frequency information is the cal,and display the
%first layer of low-frequency information.
%first, do the quantization coding for the first layer information.
cal=appcoef2(c,s,'bior3.7',1);
cal=wcodemat(cal,440,'mat',0);
%change the figure height
cal=0.5*cal;
subplot(223);image(cal);colormap(map);
axis square
title('The first compression');
disp('The size of the first compression: ');
whos('cal')
%keep the second layer of low-frequency information of the wavelet
%decomposition and compress the image; the compression ratio is higher.
%the second layer of low-frequency information is ca2; display the second
%layer of low-frequency information.
ca2=appcoef2(c,s,'bior3.7',2);
%do the quantization coding for the second layer.
ca2=wcodemat(ca2,440,'mat',0);
%change the figure height
ca2=0.25*ca2;
subplot(224);image(ca2);colormap(map);
axis square
title('The second compression');
disp('The size of the second compression: ');
whos('ca2')
The result is:
The size of the image before compression X:
  Name    Size       Bytes     Class
  X       256x256    524288    double array
Grand total is 65536 elements using 524288 bytes

The size of the first compression:
  Name    Size       Bytes     Class
  cal     135x135    145800    double array
Grand total is 18225 elements using 145800 bytes

The size of the second compression:
  Name    Size       Bytes     Class
  ca2     75x75      45000     double array
Grand total is 5625 elements using 45000 bytes
[Figure: four panels titled "Original image", "Decomposition of low and high frequency information", "The first compression", and "The second compression".]
3) Compressed Sensing
Compressed sensing, also known as compressive sensing, compressive sampling and sparse sampling, is a technique for finding sparse solutions to underdetermined linear systems. In engineering, it is the process of acquiring and reconstructing a signal utilizing the prior knowledge that it is sparse or compressible.
The main idea behind compressed sensing is to exploit that there is some structure and redundancy in the majority of interesting signals—they are not pure noise. In particular, most signals are sparse, that is, they contain many coefficients close to or equal to zero, when represented in some domain. (This is the same insight used in many forms of lossy compression.)
Compressed sensing typically starts with taking a weighted linear combination of samples also called compressive measurements in a basis different from the basis in which the signal is known to be sparse. The results found by David Donoho, Emmanuel Candès, Justin Romberg and Terence Tao showed that the number of these compressive measurements can be small and still contain all the useful information. Therefore, the task of converting the image back into the intended domain involves solving an underdetermined matrix equation since the number of compressive measurements taken is smaller than the number of pixels in the full image. However, adding the constraint that the initial signal is sparse enables one to solve this underdetermined system of linear equations.
The classical solution to such problems is to minimize the L2 norm—that is, minimize the amount of energy in the system. This is usually simple mathematically (involving only a matrix multiplication by the pseudo-inverse of the basis sampled in). However, this leads to poor results for most practical applications, as the unknown coefficients seldom have zero energy.
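As an illustration, a minimal MATLAB sketch (not part of the original text; the problem sizes, seed, and measurement matrix are arbitrary choices) computes this minimum-energy solution of an underdetermined system y = A*x with the pseudo-inverse; the recovered vector typically spreads energy over all coefficients instead of being sparse.

% Hypothetical sketch: minimum-L2-norm recovery of a sparse signal from
% underdetermined measurements via the pseudo-inverse.
rng(0);                                % fixed seed (arbitrary)
n = 256; m = 64; k = 8;                % signal length, measurements, sparsity
x = zeros(n,1);
x(randperm(n,k)) = randn(k,1);         % k-sparse test signal
A = randn(m,n)/sqrt(m);                % random measurement matrix
y = A*x;                               % compressive measurements
x_l2 = pinv(A)*y;                      % minimum-energy solution: not sparse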
In order to enforce the sparsity constraint when solving the underdetermined system of linear equations, one should minimize the L0 norm, or equivalently maximize the number of zero coefficients in the new basis. However, searching for a solution with this constraint is NP-hard (it contains the subset-sum problem), and so is computationally infeasible for all but the tiniest data sets. Following Tao et al., who showed that the L1 norm is equivalent to the L0 norm for such problems, leads one to solve an easier problem. Finding the candidate with the smallest L1 norm can be expressed relatively easily as a linear program, for which efficient solution methods already exist. These solution methods have been refined over the past few years, yielding enormous gains.
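For comparison with the L2 sketch above, here is a hedged MATLAB sketch (not from the original; it assumes the Optimization Toolbox function linprog, and the problem sizes are arbitrary) of basis pursuit, i.e. minimizing the L1 norm subject to A*x = y, written as a linear program by splitting x into its positive and negative parts.

% Hypothetical sketch: basis pursuit min ||x||_1 subject to A*x = y,
% posed as a linear program with x = u - v and u, v >= 0.
rng(0);
n = 256; m = 64; k = 8;                          % arbitrary problem sizes
x = zeros(n,1); x(randperm(n,k)) = randn(k,1);   % k-sparse test signal
A = randn(m,n)/sqrt(m);  y = A*x;                % compressive measurements
f   = ones(2*n,1);                               % objective: sum(u) + sum(v) = ||x||_1
Aeq = [A, -A];  beq = y;                         % equality constraint A*(u - v) = y
lb  = zeros(2*n,1);                              % u, v >= 0
z   = linprog(f, [], [], Aeq, beq, lb, []);      % solve the linear program
x_l1 = z(1:n) - z(n+1:end);                      % recovered (sparse) signal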
A more technical insight on the different techniques employed in sampling and decoding signals with compressive sensing can be gained in.
The field of compressive sensing has direct connections to underdetermined linear systems, group testing, heavy hitters, sparse coding, multiplexing, sparse sampling, and finite rate of innovation. Imaging techniques having a strong affinity with compressive sensing include coded aperture and computational photography. As a generic rule of thumb, any two-stage technique or indirect imaging involving the use of a computer for the reconstruction of a signal or an image is bound to find a use for compressive sensing techniques.
Starting with the famous single-pixel camera from Rice University, an up-to-date list of the most recent implementations of compressive sensing in hardware at different technology readiness levels is listed in. Some hardware implementations, like the ones used in MRI or compressed genotyping, do not require an actual physical change, whereas other hardware requires substantial re-engineering to perform this new type of sampling. Similarly, a number of hardware implementations already existed before 2004, but while they were acquiring signals in a compressed manner, they generally did not use compressive sensing reconstruction techniques to reconstruct the original signal. The results of these reconstructions were suboptimal and have been greatly enhanced thanks to compressive sensing. Hence there is a large disparity in the implementation of compressive sensing in different areas of engineering and science.
References:
1. Hayes, Brian, "The Best Bits", American Scientist, July 2009.
2. Donoho, D. L., "Compressed Sensing", IEEE Transactions on Information Theory, V. 52(4), pp. 1289-1306, 2006.
3. Candès, E. J., & Wakin, M. B., "An Introduction to Compressive Sampling", IEEE Signal Processing Magazine, V. 21, March 2008.
Etc.
