压缩采样(Compressive sampling)陶哲轩
压缩感知介绍
压缩感知稀疏优化原理示意图（双目标极小化问题，Two-Objective Minimum Problem）：
Minimum f(x) = (f₁(x), f₂(x))
其中 f₁(x) 表示稀疏度 k，f₂(x) 表示误差 err，两个目标的折中构成 Pareto 前沿。
University of Illinois
压缩感知应用实例十—动态CT图像重建
压缩感知介绍
1.压缩感知简介。
2.压缩感知的优势。
3.压缩感知稀疏优化原理示意图。
4.压缩感知应用条件。
5. 压缩感知应用。
背景
信息技术飞速发展:信息需求量剧增。
带宽增加:采样速率和处理速率增加。
压缩感知的发现者
伊曼纽尔·坎迪斯：“这就好像，你给了我十位银行账号的前三位，然后我能够猜出接下来的七位数字。” 华裔数学家陶哲轩
压缩感知介绍
1.压缩感知简介。
2.压缩感知的优势。
3.压缩感知稀疏优化原理示意图。
4.压缩感知应用条件。
5.压缩感知应用。
6.工作中可能结合之处。
压缩感知稀疏优化原理示意图
双目标极小化问题（Two-Objective Minimum Problem）：Minimum f(x) = (f₁(x), f₂(x))
传统的数据压缩与压缩感知
传统数据压缩：采集 → 压缩 → 解压
压缩感知：直接采集压缩后的数据
压缩感知
压缩感知是近年来极为热门的研究前沿，在若干应用领域中都引起瞩目。
关于这个题目,松鼠会已经翻译了两篇文章,一篇来自于压缩感知技术最初的研究者陶哲轩(链接),一篇来自威斯康辛大学的数学家艾伦伯格(本文正文)。
这两篇文章都是普及性的,但是由于作者是专业的研究人员,所以事实上行文仍然偏于晦涩。
因此我不揣冒昧,在这里附上一个画蛇添足的导读,以帮助更多的读者更好了解这个新颖的研究领域在理论和实践上的意义。
压缩感知从字面上看起来,好像是数据压缩的意思,而实则出于完全不同的考虑。
经典的数据压缩技术,无论是音频压缩(例如mp3),图像压缩(例如jpeg),视频压缩(mpeg),还是一般的编码压缩(zip),都是从数据本身的特性出发,寻找并剔除数据中隐含的冗余度,从而达到压缩的目的。
这样的压缩有两个特点:第一、它是发生在数据已经被完整采集到之后;第二、它本身需要复杂的算法来完成。
相较而言,解码过程反而一般来说在计算上比较简单,以音频压缩为例,压制一个mp3 文件的计算量远大于播放(即解压缩)一个mp3 文件的计算量。
稍加思量就会发现,这种压缩和解压缩的不对称性正好同人们的需求是相反的。
在大多数情况下,采集并处理数据的设备,往往是廉价、省电、计算能力较低的便携设备,例如傻瓜相机、或者录音笔、或者遥控监视器等等。
而负责处理(即解压缩)信息的过程却反而往往在大型计算机上进行,它有更高的计算能力,也常常没有便携和省电的要求。
也就是说,我们是在用廉价节能的设备来处理复杂的计算任务,而用大型高效的设备处理相对简单的计算任务。
这一矛盾在某些情况下甚至会更为尖锐,例如在野外作业或者军事作业的场合,采集数据的设备往往曝露在自然环境之中,随时可能失去能源供给或者甚至部分丧失性能,在这种情况下,传统的数据采集-压缩-传输-解压缩的模式就基本上失效了。
压缩感知的概念就是为了解决这样的矛盾而产生的。
既然采集数据之后反正要压缩掉其中的冗余度,而这个压缩过程又相对来说比较困难,那么我们为什么不直接「采集」压缩后的数据?这样采集的任务要轻得多,而且还省去了压缩的麻烦。
陶哲轩的PPT(1):线性代数Ax=b的求解以及压缩感知
Ax=b
A low-dimensional example of a least-squares guess.
Sparsity helps!
Intuitively, if a signal x ∈ Rn is S -sparse, then it should only have S degrees of freedom rather than n. In principle, one should now only need S measurements or so to reconstruct x , rather than n. This is the underlying philosophy of compressive sensing: one only needs a number of measurements proportional to the compressed size of the signal, rather than the uncompressed size. An analogy would be with the classic twelve coins puzzle: given twelve coins, one of them counterfeit (and thus heavier or lighter than the others), one can determine the counterfeit coin in just three weighings, by weighing the coins in suitably chosen batches. The key point is that the counterfeit data is sparse.
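下面给出一个与上述思想对应的数值小例子（示意性质，非原文内容，参数均为自拟）：先生成一个 S-稀疏信号，用远少于 n 的高斯随机测量观测它，再把 ℓ1 极小化（基追踪）改写为线性规划，用 scipy 求解并查看重构误差。

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, S = 200, 60, 8              # 信号长度 n、测量数 m、稀疏度 S（m 远小于 n）

x = np.zeros(n)                   # 构造 S-稀疏信号
x[rng.choice(n, S, replace=False)] = rng.standard_normal(S)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # 高斯随机测量矩阵
b = A @ x                         # m 个线性测量

# 基追踪：min ||x||_1  s.t.  Ax = b
# 令 x = u - v（u, v >= 0），则 ||x||_1 = sum(u + v)，化为标准线性规划
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
print("相对重构误差:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```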
Lasso问题与LARS算法
c = Xᵀ(y − μ_A)，即 c_j = ⟨x_j, y − μ_A⟩   (2-20)
为当前各自变量与因变量的相关。若设
C = max_j {|c_j|}   (2-21)
为当前最大相关,则根据假定,A中自变量与y的相关都为最大相关,即
|c_j| = C，对 j ∈ A；
|c_j| < C，对 j ∉ A。
Figure 2.1中间一幅图给出了这种情况。初始时y与x1的相关度较高，由于每次只逼近一小步，因此需要多次用x1进行逼近，直到x2的相关度变得比x1更大（临界点处y′在x1和x2的角分线上）。之后交替用x2和x1进行逼近，一直达到（或非常接近于）因变量y。
这种算法能够给出最优解，且步长ε越小，给出的解就越精确，但算法的复杂度也随ε的减小而增大。一般情况下，为了保证精度，ε往往取得很小，因此算法的复杂度往往很高。
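按照上文“每次只沿当前最大相关的自变量逼近一小步 ε”的描述，可以写出一个前向分步回归的极简示意（假设性示例，并非 LARS 的完整实现，函数与变量名均为自拟）：

```python
import numpy as np

def forward_stagewise(X, y, eps=0.01, n_iter=5000):
    """每次选与当前残差相关最大的自变量，沿其符号方向前进一小步 eps。"""
    beta = np.zeros(X.shape[1])
    r = y.astype(float).copy()            # 当前残差 y - X @ beta
    for _ in range(n_iter):
        c = X.T @ r                        # 各自变量与残差的相关，对应式 (2-20)
        j = int(np.argmax(np.abs(c)))      # 最大相关 C = max_j |c_j|，对应式 (2-21)
        if abs(c[j]) < 1e-8:
            break
        step = eps * np.sign(c[j])
        beta[j] += step
        r -= step * X[:, j]
    return beta

# 用法示意：eps 越小结果越精确，但迭代次数（复杂度）也越多
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 10))
beta_true = np.zeros(10); beta_true[:2] = [2.0, -1.5]
y = X @ beta_true + 0.1 * rng.standard_normal(100)
print(forward_stagewise(X, y).round(2))
```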
y_k = ⟨x, φ_k⟩,  k = 1, 2, …, n   (2-3)
其中x ∈ Rm为待测信号,φk为采样函数(传统时域-频域采样中即为傅里叶级 数), ⟨·, ·⟩表示内积。
如果采样函数是线性的,(2-3)式可写作
y_{n×1} = Φ_{n×m} x_{m×1}   (2-4)
其中Φ ∈ Rn×m为采样矩阵。压缩采样中,我们希望n ≪ m,从而进行大幅度的 数据压缩。采样后得到结果y ∈ Rn,我们希望能完好地恢复被测信号x ∈ Rm,即 数据还原。
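下面用一个小例子（假设性示意）说明为什么还需要稀疏先验：当 n ≪ m 时，(2-4) 是欠定方程组，最小能量（最小 ℓ2 范数）解虽然满足观测，但一般不是稀疏的，也就还原不出真正的 x。

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 100, 25                    # 与式 (2-4) 记号一致：x ∈ R^m，Φ ∈ R^{n×m}，n << m
x = np.zeros(m)
x[rng.choice(m, 3, replace=False)] = [1.0, -2.0, 0.5]   # 3-稀疏的被测信号

Phi = rng.standard_normal((n, m))
y = Phi @ x                       # n 个观测值

# 最小 l2 范数解（伪逆解）：满足 y = Φ x_l2，但能量分散，几乎处处非零
x_l2 = np.linalg.pinv(Phi) @ y
print("真实信号的非零个数:", np.count_nonzero(x))
print("最小能量解中 |x_i| > 1e-6 的个数:", int(np.sum(np.abs(x_l2) > 1e-6)))
```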
2.4.2 代数定义
前面的几何分析虽然直观,但高维向量“角分线”的定义需要用具体的代 数算式给出。此外每次逼近的步长(即何时下一个自变量的相关度增大到与正 在计算的自变量相关度相同)也需要计算得到。
基于压缩感知的时频分析方法
基于压缩感知的时频分析方法以时频表示的稀疏性为前提。
通过构造压缩感知的欠定方程和重构算法，它实现了高时频聚集度、低计算量，并能处理时频表示中的缺失数据。
大多数现有工作仅限于基本欠定方程构造阶段和传统重构算法的使用。
信号模型,重构算法设计和信号自适应稀疏表示都缺乏深入的研究。
为了解决上述问题,建立了非平稳信号的时频表示的一般先验模型。
在信噪比低、缺乏先验知识的情况下，设计了性能好、复杂度低的时频表示重构算法，以获得一种时频聚集度高、计算量小、自适应且能处理缺失数据的时频分析方法。
1.前言
信号采集、信号描述和信号处理是现代科学研究和实际应用中不可或缺的重要组成部分。
在信号处理中,时频分析是非平稳信号的有力工具(Boashash,Boualem,Azemi,Ghasem,O’Toole,John M.Time-Frequency Processing of Nonstationary Signals:IEEE Signal Processing Magazine,2013;张傲.基于稀疏快速傅里叶变换的时频分析技术研究:西南交通大学,2018),因此引起了广泛的关注。
时频分析将一维时域信号映射到二维频率平面,构建时间和频率的联合函数。
它不仅可以表示时频域中信号能量的局部变化,还可以用于提取信号的频率特性。
它已被广泛应用于许多研究领域。
频谱分析研究对于有效分析地震波和导波信号,生物信号,机械信号,雷达信号和语音信号等非平稳信号具有重要意义。
随着时频分析技术和相关信号分析方法的发展,传统的时频表示有了很多改进和新的时频表示方法。
根据具体表现形式的不同，它们可以分为线性时频变换、非线性时频表示、自适应时频分析和基于经验模式分解的时频分析方法。
线性时频变换使用基函数实现信号的分解。
它的基函数具有特定的时频局部特征,因此会产生不同的时频表示。
目前线性时频变换的发展主要体现在窗函数的构造和时频基函数的设计和尺度变换上(P.Balazs,Senior Member.Adapted and Adaptive Linear Time-Frequency Representations:A Synthesis Point of View:IEEE Signal Processing Magazine,2013)。
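为直观起见，下面给出一个最基本的线性时频变换的示意代码（假设性示例，函数与参数均为自拟）：对信号滑动加 Hann 窗后做 FFT，即短时傅里叶变换；窗函数的选择决定了时频局部特性。

```python
import numpy as np

def stft(x, win_len=128, hop=32):
    """极简短时傅里叶变换：滑动 Hann 窗 + FFT，返回形状为 (帧数, 频点数) 的复数矩阵。"""
    win = np.hanning(win_len)
    frames = []
    for start in range(0, len(x) - win_len + 1, hop):
        seg = x[start:start + win_len] * win     # 加窗：窗函数决定时频局部特性
        frames.append(np.fft.rfft(seg))          # 在傅里叶基函数上分解
    return np.array(frames)

# 用法示意：瞬时频率随时间线性增加的 chirp 信号
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * (50 * t + 100 * t ** 2))  # 瞬时频率为 50 + 200 t（Hz）
print("时频矩阵形状:", stft(x).shape)
```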
压缩感知的天文图像去噪算法
压缩感知的天文图像去噪算法张杰;朱奕;史小平【摘要】针对压缩感知迭代收缩阈值算法在图像处理中存在收敛速度慢和去噪性能差的缺陷,提出了一种改进的高性能迭代收缩阈值天文图像去噪重建算法.首先,使用经典最速下降法中的BB线性搜索步长算子加快迭代收缩阈值算法的收敛速度;其次,为了进一步提高重构天文图像的质量,在传统VisuShrink收缩阈值的基础上,提出一种下降VisuShrink收缩阈值对图像信息进行筛选;由于阈值去噪方法在迭代重建的过程中会导致重建的图像中出现伪吉布斯效应,最后采用循环平移的方法在每次迭代过程中对获取的重建图像进行调整.多次的试验结果表明,与传统的压缩感知迭代收缩阈值算法相比,所提出的算法不仅能够获得较优的去噪性能和较快的收敛速度,同时可以有效地保护天文图像的特征和纹理等细节信息.此外,当选取的压缩采样比较低时,本算法也可以获得相对较高的峰值信噪比和视觉质量,进一步验证了本算法在天文图像去噪中的有效性.%In the deep space exploration, astronomical image acquisition, transmission and processing have always been the focus of research. To solve problems of slow convergence speed, poor denoising performance in compressed sensing iterative shrinkage-thresholding algorithm for image processing, an improved iterative shrinkage-thresholding astronomical image denoising and reconstruction algorithm with high performance is proposed. Firstly, the BB linear search stepsize of the classical steepest descent algorithm is used to accelerate the convergence speed of iterative shrinkage-thresholdingalgorithm;secondly, to further improve the reconstructed astronomical image quality, based on the classical VisuShrink shrinkage-threshold, adecreasing VisuShrink shrinkage-threshold is proposed to select the image information; since the pseudo-gibbs effect caused by threshold denoising method will appear in the process of image reconstruction, the cycle spinning method is finally employed to adjust the reconstructed image in each iteration. Multiple experimental results show that, compared with the traditional compressed sensing iterative shrinkage-thresholding algorithm, the algorithm proposed can not only obtain better denoising performance and faster convergence speed, but also effectively protect the astronomical image detail information, such as feature and texture. In addition, when compression sampling ratio is lower, the algorithm proposed also can obtain relatively higher peak signal to noise ratio and visual quality, proving the effectiveness of the algorithm proposed for astronomical image denoising.【期刊名称】《哈尔滨工业大学学报》【年(卷),期】2017(049)010【总页数】6页(P78-82,89)【关键词】收缩阈值;天文图像;去噪;压缩感知;循环平移【作者】张杰;朱奕;史小平【作者单位】哈尔滨工业大学控制与仿真中心,哈尔滨150080;哈尔滨工业大学控制与仿真中心,哈尔滨150080;哈尔滨工业大学控制与仿真中心,哈尔滨150080【正文语种】中文【中图分类】TN911.73利用天文图像对外太空进行研究是深空探索的一个重要分支.从获得的天文图像可以直接得知某一星体上的地表结构,是否存在未知生命等重要信息.为了能够获得更多的天文信息,天文图像的采集一般都是采用高分辨率CMOS/CCD传感器,但是卫星或者其他的深空探测设备所携带的存储空间有限,存储这些高分辨率图像将会给有限的存储空间带来较大压力.为解决这一难题,天文图像在存储之前通常都要经过压缩处理.然而常用的JPEG/JPEG-2000方法[1-3]很难获得较低的图像压缩采样比. 
另一方面,由于环境、拍摄条件等因素的影响,天文图像在采集和传输的过程中经常受到噪声信号的干扰,在地面接收站接收到的天文图像通常含有噪声.从接收到的图像中很难分辨出某一区域的实际地形特征.因此在接收到这些天文图像后,通常要进行去噪处理.然而目前大部分的去噪方法经常由于很难从高维数的信号观测值中获得足够有效的图像信息,导致重构信号的质量较差[4-5].为有效解决高维数信号的重建问题,学者们一直在探索如何从高维数信号中获得一种低维的信号结构,同时保证原始信号能从这种低维结构中获得精确重建.即,从信号中提取重要的信息同时舍弃非重要信息,直接使用这些重要信息重建高维数信号.近几年提出的信号稀疏性[5-6]可能是探索信号低维结构的一种比较简单方法.基于信号的稀疏性,Donoho[7]提出了著名的压缩感知(compressed sensing,CS)理论.该理论一经提出,就引起了各领域的极大关注.它指出:如果信号在某一稀疏变换基上是稀疏的,则可以使用一个与该稀疏基不相关的低维测量矩阵对原信号进行观测,同时使用某一CS重建算法就可以从获得的少量信号观测值中精确重构原始信号.可以看出,信号在采集的同时就已经完成了压缩过程.传统的奈奎斯特/香农采样定理要求信号的采样速率必须大于或者等于两倍的信号带宽才能够精确重建原始信号,但是采样速率在低于两倍的信号带宽时,CS方法仍可以精确重建原始信号.因此,CS 理论可以有效地解决高维数信号的重建问题,本文将其应用到高分辨率天文图像去噪中.CS理论主要包含3个部分:稀疏变换、测量矩阵和重建算法.本文主要关注于如何设计一个高性能的重建算法.经过近几年的努力,学者们提出了许多的重建算法,例如线性规划类算法[8-9]、迭代收缩阈值类(iterative shrinkage thresholding,IST)算法[10-12]、梯度下降类方法[13]和贝叶斯类方法[14]等.在这些算法当中,IST算法设计简单且易于实现,更重要的是大部分的稀疏基都能较容易地应用到IST框架当中.这些优势使得IST算法经常受到学者们的青睐.然而该算法的收敛速度较慢,去噪性能也需要进一步提高.为了提高梯度下降法的收敛速度,Barzilai等[15]提出了著名的BB线性搜索步长,并取得较快的收敛效果.本文将其应用到IST算法中调节其收敛速度.在图像去噪中,通常采用阈值去噪方法对图像信息进行筛选,本文提出一种下降VisuShrink收缩阈值筛选天文图像信息.阈值去噪虽然设计简单且能获得较好的去噪效果,但在奇异点(如边缘或者纹理)附近会出现较大的幅值振荡,最终导致重构的图像出现伪吉布斯现象[16].循环平移方法[17-18]可有效地减小或者消除这种幅值振荡,提高重构图像质量.本文将其应用到IST算法中,在迭代过程中对重构的天文图像进行调整.基于上述技术,本文提出了高性能的IST改进算法.实验结果表明,该算法可以以较快的收敛速度重构一幅清晰的天文图像.当压缩采样比较低时,该算法也具有较好的重建性能.假设某一N×1信号x是K-稀疏的[6-7],则可以用一个低维非自适应矩阵(又称为测量矩阵)Ф∈ RM×N(M<<N)对原始信号x进行观测,进而得到低维含噪观测值y,可表示为从式(1)可以看出,由于M<<N,从y中重构出原始信号x是一个病态问题.然而,CS 理论指出如果信号x在某个正交基Ψ上是稀疏的,同时测量矩阵Ф满足RIP准则,则原始信号x可以获得高概率重构[3-5,13,19].这里可以通过求解线性规划的最优解问题重建原始信号.此时,式(1)可以改写为式中s=Ψ-1x为稀疏系数.这里Θ=ΦΨ可以作为CS测量矩阵直接对s进行观测.测量矩阵Ф要求与稀疏基Ψ不相关,两者越不相关,需要的测量次数就越少,则可以获得更低的信号压缩采样比.具有稀疏限制的l1范数最小化方法经常用来求解CS问题(2),即或者转化为对稀疏系数s的求解,可表示为式中第1项代表惩罚项,用来估算计算值与观测值之间的偏差;第2项为正则化项,表示原始信号的先验知识.经典的IST算法迭代过程可描述为式中:μk为线性搜索步长,为了计算方便通常设定为1,虽然简化了计算,但影响了算法的收敛速度;HS(·)表示阈值算子,通常考虑为硬阈值算子.其中式中T为阈值,本文考虑为VisuShrink阈值(通用阈值)σ,其中σ为噪声标准差.如果重构一幅N×N高维数图像,阈值T会随着N×N的增大而变得过大,在迭代过程中导致较多的细节特征信息丢失.本文提出了一种下降VisuShrink收缩阈值在迭代过程中对图像信息进行筛选,可表示为式中:q为迭代索引;Q为最大迭代次数.当q=0时,T为通用阈值.2.1 BB线性搜索步长最速下降法[20]的迭代过程可表示为式中:fk=f(xk)为任一目标函数Γ在位置xk处的梯度向量;μk>0为线性搜索步长,它要求满足以下条件:).令目标函数可以得到以下关系式和经典的SD迭代步长算子为为了获得高性能的步长算子,文献[15]用前一次的迭代信息设定当前迭代所使用的步长算子,同时修改了式(3)的迭代过程,即式中Dk=μkI.其中I为单位向量.为了保证它具有某种拟牛顿特性,需要满足以下任一限制条件:式中:sk-1=xk-xk-1,=fk-fk-1.基于上述条件,文献[15]提出了著名的BB步长算子,可表示为:文献[20]证明了BB步长算子比SD步长算子更能有效地提升最速下降法的收敛速度.本文将BB步长算子应用到IST算法中调节其收敛速度.2.2 循环平移方法在图像去噪的过程中,当一幅图像包含有较多奇异点时,将会遇到如下问题:对于某一个奇异点具有较好去噪效果的平移量可能对另一个奇异点去噪效果较差.因此,对于包含较多纹理和边缘特征的天文图像,就很难获得针对所有奇异点都具有较好去噪效果的最佳平移量.循环平移方法可有效解决这一难题,其循环平移过程可描述为式中K1、K2分别为沿行和列方向的最大平移量.在小波变换中,如果测试一幅N×N图像x且N=2K,则认为K1=K2=K为最大平移量.Ci,j(x)为定义的循环平移算子,对于二维图像x(m,n),可定义为C-i,-j(x)为Ci,j(x)的逆过程,可表示为此外,S(x)为小波变换过程,S-1(x)为小波逆变换.θ(x)为阈值函数,本文将其考虑为硬阈值函数HS(x),并且阈值T为下降VisuShrink收缩阈值.2.3 本文算法综上所述,本文算法的设计步骤概括如下:1)初始化过程.初始化迭代索引q=0,最大迭代次数Q=30和重构图像xq=0.2)计算下降VisuShrink收缩阈值和BB步长算子μq.3)使用下式更新估计值.4)对重构的图像进行循环平移,可表示为).6)q=q+1.如果q=Q,输出重构图像;否则,进入步骤2).本文考虑引入的噪声信号为高斯白噪声,同时为了验证本文算法的去噪性能,将它与传统IST算法、基于全变差方法的IST算法(IST-TV)[3]以及基于循环平移方法的IST算法(IST-CS)对比.本文的压缩采样比定义为:观测值数量/信号长度.首先测试大小为1 024×1 024的月球图像,并设定压缩采样比为0.3和噪声标准差图1显示了不同重构算法重构得到的结果.从图1可以看出,与其他算法相比,本文算法能获得较好的视觉质量和较高的峰值信噪比(PSNR),但花费的重构时间(reconstruction 
time,RT)较长.同时可以看出,IST-TV、IST-CS和本文算法都能有效地抑制伪吉布斯效应.设定图2(a)、(b)分别显示了PSNR和RT随压缩采样比增长的变化曲线.可以看出,随着压缩采样比的增长,不同算法的重构能力逐渐提高.与其他算法相比,本文算法仍能获得较高的PSNR值,但花费的重构时间也相对较长.设定噪声标准差和压缩采样比不变,如图3所示不同算法随迭代次数增长获得的PSNR值.可以看出,本文算法具有比其他算法更快的收敛速度.测试另一幅天文图像(木星图像),设定压缩采样比为0.1和图4显示了不同算法重构得到的结果.可以看出,当压缩采样比较低时,本文算法仍可获得相对较好的视觉质量和较高的PSNR.测试更多的天文图像,表1显示了当不同算法随压缩采样比变化时获得的PSNR值;表2显示了当压缩采样比为0.3时,不同算法随的增加获得的PSNR值.从表1、2可以看出,针对不同的天文图像,本文算法仍具有较好的重建性能.测试本文算法对天文图像特征的重构能力,从原始月球图像中选取大小为512×512的局部特征图像作为实验图像.设定压缩采样比为0.3和图5显示了不同算法重构得到的结果.可以看出,与IST-CS算法相比,本文算法能恢复较多的细节特征.1)本文将压缩感知理论应用到天文图像去噪,并在迭代收缩阈值算法的基础上,提出了一种高性能改进算法.该算法首先使用BB步长算子调节收敛速度;其次使用提出的下降VisuShrink阈值对重构图像进行筛选;最后对重建图像进行循环平移以消除伪吉布斯效应.2)本文算法与IST、IST-TV和IST-CS算法的对比分析结果表明,本文算法可以以较快的收敛速度重构一幅清晰的天文图像,并且能有效地保护天文图像的细节特征.在压缩采样比较低时,本文算法也可获得较优的重构效果.3)虽然本文算法的收敛速度和去噪重建性能得到很大的提高,但是花费的重建时间较长,因此如何提高本文算法的重建速度是以后改进的方向.史小平(1965—),男,教授,博士生导师(编辑张红)【相关文献】[1] SKODRAS A, CHRISTOPOLOS C, EBRAHIMI T. The JPEG 2000 still image compression standard[J]. IEEE Signal Processing Magazine, 2001, 18(5): 36-58.DOI: 10. 1109/79.952804.[2] WALLANCEG K. The JPEG still picture compression standard[J]. IEEE Transactions on Consumer, 2002, 38(1): 18-34. DOI: 10.1109/30.125072.[3] SHI Xiaoping, ZHANG Jie. Reconstruction and transmission of astronomical image based on compressed sensing[J]. Journal of Systems Engineering and Electronics, 2016, 27(3):680-690.DOI: 10.1109/JSEE.2016.00071.[4] ELDARY C, KUTYNIOK G. Compressed sensing: theory and applications[M]. New York: Cambridge University Press, 2012:1-515.[5] BENEDETTO J J. Compressed sensing and its applications[M]. USA: Birkhauser, 2013:97-143.[6] CANDES E, ROMBERG J. Sparsity and incoherence in compressive sampling[J]. Inverse Problems, 2007, 23(3): 969-985.DOI:10.1088/0266-5611/23/3/008.[7] DONOHO D L. Compressed sensing[J]. IEEE Transactions on Information Theory, 2006, 52(4): 1289-1306.DOI: 10.1109/TIT.2006.871582.[8] CANDES E J, TAO T. Decoding by linear programming[J]. IEEE Transactions on Information Theory, 2009, 51(12): 4203-4215.DOI:10.1109/TIT.2005.858979.[9] FIGUEIREDOM A T, NOWAK R D, WRIGHT S J. Gradient Projection for sparse reconstruction: application to compressed sensing and other inverse problems[J]. IEEE Journal of Selected Topics in Signal Processing, 2007,1(4):586-597.DOI:10.1109/JSTSP.2007.910281.[10]BLUMENSATH T, DAVIES M E. Iterative hard thresholding for compressed sensing[J]. Applied and Computational Harmonic Analysis, 2009, 27(3):265-274. DOI:10.1016/j. acha.2009.04.002.[11]CAI Jianfeng, OSHER S, SHEN Zuowei. Linearized bregman iterations for compressed sensing[J]. Mathematics of Computation, 2009, 78(267): 1515-1536. DOI:10.1090/S0025-5718-08- 02189-3.[12]DAUBECHIES I, DEFRISE M, MOL C D. An iterative thresholding algorithm for linear inverse problems with a sparisity constraint[J]. Communications on Pure and Applied Mathematics, 2004, 57(11):1413-1457.DOI:10.1002/ cpa.20042.[13]GARG R, KHANDEKAR R. Gradient descent with sparsification: an iterative algorithm for sparse recovery with restricted isometry property[C]//Proceedings of the 26th Annual International Conference on Machine Learning. New York, NY: ACM, 2009: 337-344. DOI:10.1145/1553374.1553417.[14]JI Shihao, XUE Ya, CARIN L. Bayesian compressive sensing[J]. IEEE Transactions on Signal Processing, 2008, 56(6): 2346-2356. DOI: 10.1109/TSP.2007.914345.[15]BARZILAI J, BORWEIN J M. Two point step size gradient methods[J]. IMA Journal of Numerical Analysis, 1988, 8(1):141-148.DOI: 10.1093/imanum/8.1.141.[16]王蓓,张根耀,李智,等. 基于新阈值函数的小波阈值去噪算法[J].计算机应用,2014, 34(5): 1499-1502.DOI:10. 
11772/j.issn.1001-9081.2004.05.1499.WANG Pei, ZHANG Genyao, LI Zhi, et al. Wavelet threshold denoising algorithm based on new threshold function[J]. Journal of Computer Applications, 2014, 34(5): 1499-1502.DOI: 10.11772/j.issn.1001-9081.2004.05.1499.[17]BINHN T, KHARE A. Multilevel threshold based image denoising in curvelet domain[J]. Journal of Computer Science, 2010, 25(3): 632-640. DOI:10.1007/s11390- 010-9352-y. [18]郭海涛,赵红叶,徐雷,等. 基于循环平移和DTCWT的声呐图像滤波方法[J]. 仪器仪表学报,2015,36(6): 1351-1356.DOI: 10.3969/j.issn.0254-3087.2015.06.020.GUO Haitao, ZHAO Hongye, XU Lei, et al. Sonar image filtering method based on cycle shift and DTCWT[J]. Chinese Journal of Scientific Instrument, 2015,36(6): 1351-1356.DOI: 10.3969/j.issn.0254-3087.2015.06.020.[19]CANDES E J, ROMBERG J, TAO T. Robust uncertainty principles: exact signal reconstruction from highly incomplete information[J]. IEEE Transactions on Information Theory, 2006, 52(2): 489-509. DOI: 10.1109/ TIT.2005.862083.[20]YUAN Yaxiang. A new stepsize for the steepest descent method[J]. Journal of Computational Mathematics, 2006, 24(2):149-156.DOI: 10.1063/1.4882499.。
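下面按上面这篇天文图像去噪文章所描述的“IST + BB 步长 + 递减 VisuShrink 阈值”思路，给出一个极简示意（假设性示例）：为简明起见直接在稀疏系数域演示，省略了小波变换与循环平移两步，阈值递减的具体调度方式也只是示意性假设，并非论文原始设置。

```python
import numpy as np

def ist_bb(Phi, y, n_iter=30, sigma=0.1):
    """IST 示意实现：梯度步（BB 步长）+ 硬阈值，阈值随迭代次数递减。"""
    n = Phi.shape[1]
    x = np.zeros(n)
    x_prev = g_prev = None
    for q in range(n_iter):
        g = Phi.T @ (Phi @ x - y)                 # 梯度方向
        if q == 0:
            mu = 1.0                              # 首次迭代用单位步长
        else:
            s, d = x - x_prev, g - g_prev
            mu = float(s @ s) / (float(s @ d) + 1e-12)   # BB 步长
            if not np.isfinite(mu) or mu <= 0:
                mu = 1.0
        x_prev, g_prev = x.copy(), g.copy()
        x = x - mu * g                            # 梯度下降一步
        T = sigma * np.sqrt(2 * np.log(n)) * (1 - q / n_iter)  # 递减阈值（示意性调度）
        x[np.abs(x) < T] = 0                      # 硬阈值筛选系数
    return x

# 用法示意
rng = np.random.default_rng(3)
m, n = 120, 256
x0 = np.zeros(n); x0[rng.choice(n, 8, replace=False)] = 3 * rng.standard_normal(8)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x0 + 0.05 * rng.standard_normal(m)
x_hat = ist_bb(Phi, y)
print("残差:", np.linalg.norm(y - Phi @ x_hat), " 非零个数:", np.count_nonzero(x_hat))
```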
Compressive_Sensing
“压缩传感”引论
香港大学电机电子工程学系 高效计算方法研究小组 沙威
2008年11月20日
Email: wsha@eee.hku.hk
Personal Website: http://www.eee.hku.hk/~wsha
到了香港做博士后，我的研究领域继续锁定计算电磁学。
我远离小波和计算时谐分析有很长的时间了。
尽管如此,我仍然陆续收到一些年轻学者和工程人员的来信,和我探讨有关的问题。
对这个领域，我在理论上几乎没有什么贡献，唯一可以拿出手的就是几篇网上发布的帖子和一些简单的入门级的代码。
但是,从小波和其相关的理论学习中,我真正懂得了一些有趣的知识,并获益良多。
我深切地感到越来越多的学者和工程师开始使用这个工具解决实际的问题,我也发现互联网上关于这方面的话题多了起来。
但是,我们永远不能只停留在某个阶段,因为当今学术界的知识更新实在太快。
就像我们学习了一代小波,就要学习二代小波;学习了二代小波,就要继续学习方向性小波(X-let)。
我也是在某个特殊的巧合下不断地学习某方面的知识。
就像最近,我的一个友人让我帮她看看“压缩传感”(Compressive Sensing)这个话题的时候,我的兴趣又一次来了。
我花了一个星期,阅读文献、思考问题、编程序、直到写出今天的帖子。
我希望这篇帖子,能对那些没进入且迫切想进入这个领域的学者和工程师有所帮助。
并且,我也希望和我一个星期前一样,对这个信号处理学界的“一个大想法”(A Big Idea)丝毫不了解的人,可以尝试去了解它。
我更希望,大家可以和我探讨这个问题,因为我到现在甚至不完全确定我对压缩传感的某些观点是否正确,尽管我的简单的不到50行的代码工作良好。
在这个领域中,华裔数学家陶哲轩和斯坦福大学的统计学家David Donoho 教授做出了重要的贡献。
在这个引言中,我用简单的关键字,给出我们为什么需要了解甚至是研究这个领域的原因。
那是因为,我们从中可以学习到,下面的这些:矩阵分析、统计概率论、拓扑几何、优化与运筹学、泛函分析、时谐分析(傅里叶变换、小波变换、方向性小波变换、框架小波)、信号处理、图像处理等等。
陶哲轩的10岁与30岁（2020）
压缩感知(compressed sensing)的通俗解释在我看来,压缩感知是信号处理领域进入21世纪以来取得的最耀眼的成果之一,并在磁共振成像、图像处理等领域取得了有效应用。
压缩感知理论在其复杂的数学表述背后蕴含着非常精妙的思想。
基于一个有想象力的思路,辅以严格的数学证明,压缩感知实现了神奇的效果,突破了信号处理领域的金科玉律——奈奎斯特采样定律。
即,在信号采样的过程中,用很少的采样点,实现了和全采样一样的效果。
一、什么是压缩感知（CS）？
compressed sensing 又称 compressed sampling，似乎后者看上去更加直观一些。
没错,CS是一个针对信号采样的技术,它通过一些手段,实现了“压缩的采样”,准确说是在采样过程中完成了数据压缩的过程。
因此我们首先要从信号采样讲起：
1. 我们知道，将模拟信号转换为计算机能够处理的数字信号，必然要经过采样的过程。问题在于，应该用多大的采样频率，即采样点应该多密多疏，才能完整保留原始信号中的信息呢？
2. 奈奎斯特给出了答案——信号最高频率的两倍。一直以来，奈奎斯特采样定律被视为数字信号处理领域的金科玉律。
3. 至于为什么是两倍，学过信号处理的同学应该都知道，时域以τ为间隔进行采样，频域会以1/τ为周期发生周期延拓。那么如果采样频率低于两倍的信号最高频率，信号在频域频谱搬移后就会发生混叠（见下文的小例子）。
4. 然而这看似不容置疑的定律却受到了几位大神的挑战。
Candes 最早意识到了突破的可能,并在不世出的数学天才陶哲轩以及Candes 的老师Donoho的协助下,提出了压缩感知理论,该理论认为:如果信号是稀疏的,那么它可以由远低于采样定理要求的采样点重建恢复。
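针对上面第 3 点所说的混叠现象，可以用几行代码（假设性示例）直观感受一下：对一个 60 Hz 正弦分别用 200 Hz（高于两倍最高频率）和 80 Hz（低于两倍）采样，后者的谱峰会出现在错误的位置。

```python
import numpy as np

def peak_freq(fs, f0=60.0, T=2.0):
    """以采样率 fs 采样 f0 Hz 的正弦，返回幅度谱峰值对应的频率。"""
    t = np.arange(0.0, T, 1.0 / fs)
    x = np.sin(2 * np.pi * f0 * t)
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return freqs[np.argmax(spec)]

print("fs = 200 Hz 时谱峰在", peak_freq(200.0), "Hz")   # ≈ 60 Hz，采样率足够，无混叠
print("fs =  80 Hz 时谱峰在", peak_freq(80.0), "Hz")    # ≈ 20 Hz，混叠到 |80 - 60| = 20 Hz
```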
稀疏优化与低秩矩阵优化演示文稿
加上稀疏性条件 ‖x‖₀ ≤ k，从而得到稀疏优化模型：
min xᵀVx
s.t. Ax = b,  x ≥ 0,  ‖x‖₀ ≤ k.   (3)
三、应用实例
例2、互补问题的稀疏解
众所周知，二人矩阵博弈模型、具有生产和投资的经济均衡模型、交通流均衡模型等，都可以转化为互补问题。如果这个互补问题有多个解，则在这个解集中寻找一个最为稀疏的解：
三、应用实例
例6、Sparse-View CT
CT是医学诊断的主要工具之一，其成像的数学原理是Ax=b，其中x是一个未知向量，表示人体待检查部位的图像，维数512*512代表像素个数；b是一个测量值，其维数为1160*672，代表射线的条数（1160代表圆周划分的角度）；A是(1160*672)×(512*512)阶矩阵，由物理规律得到。由于其行数大于列数，一般情况下该方程无解。更为可怕的是行数多意味着“吃线”多，对人的身体危害极大。期望：“吃线少、时间短、图像清晰”。现在，假设人体待检查部位的图像稀疏，那么应用稀疏优化理论和算法，可以进行研究。
矩阵秩极小（或低秩矩阵恢复）问题是指在某种线性约束条件下，求一个决策矩阵使其秩达到极小。它的基本数学模型是：
min rank(X)  s.t.  A(X) = b,   (2)
其中 A 是从 R^{m×n} 到 R^d 的线性变换，b 是一个 d 维向量。
一、模型与基本概念
【注】
1、模型(2)是模型(1)的一个推广；
2、可以把零范数和秩函数放在约束中，变成非凸约束。
四、理论与算法
凸松弛理论和算法
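作为“凸松弛”的一个具体示例（假设性示意，非原讲义内容）：把模型 (2) 中的秩函数松弛为核范数后，矩阵补全问题可以用奇异值阈值（SVT）迭代来近似求解；下面代码中 τ 与步长取自 SVT 文献中常用的经验值，仅供示意。

```python
import numpy as np

def svt_complete(M_obs, mask, tau, step, n_iter=300):
    """奇异值阈值（SVT）：核范数是秩函数的凸松弛，对奇异值做软阈值即可逼近低秩解。"""
    Y = np.zeros_like(M_obs)
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # 奇异值软阈值收缩
        Y = Y + step * mask * (M_obs - X)         # 只在观测位置 Ω 上修正
    return X

# 用法示意：补全一个秩为 2、只观测一半元素的 30×30 矩阵
rng = np.random.default_rng(4)
A = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = rng.random(A.shape) < 0.5                  # 观测位置 Ω
tau = 5 * 30                                      # 经验取值 τ ≈ 5·sqrt(n1·n2)
step = 1.2 / 0.5                                  # 经验取值 δ ≈ 1.2 / 采样率
X = svt_complete(A * mask, mask, tau, step)
print("相对恢复误差:", np.linalg.norm(X - A) / np.linalg.norm(A))
```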
浅谈压缩感知(八):两篇科普文章
浅谈压缩感知（八）：两篇科普文章
分享两篇来自科学松鼠会的科普性文章：1、（陶哲轩，Terence Tao）2、（Jordan Ellenberg）
一、压缩感知与单像素相机
按：这是数学家陶哲轩在其博客上写的一篇科普文章，讨论的是近年来在应用数学领域里最热门的话题之一：压缩感知（compressed sensing）。
所谓压缩感知，最核心的概念在于试图从原理上降低对一个信号进行测量的成本。
比如说，一个信号包含一千个数据，那么按照传统的信号处理理论，至少需要做一千次测量才能完整地复原这个信号。
这就相当于是说，需要有一千个方程才能精确地解出一千个未知数来。
但是压缩感知的想法是假定信号具有某种特点（比如文中所描述的在小波域上系数稀疏的特点），那么就可以只做三百次测量就完整地复原这个信号（这就相当于只通过三百个方程解出一千个未知数）。
可想而知，这件事情包含了许多重要的数学理论和广泛的应用前景，因此在最近三四年里吸引了大量注意力，得到了非常蓬勃的发展。
陶哲轩本身是这个领域的奠基人之一（可以参考一文），因此这篇文章的权威性毋庸讳言。
另外，这也是比较少见的由一流数学家直接撰写的关于自己前沿工作的普及性文章。
需要说明的是，这篇文章虽然是写给非数学专业的读者，但是也并不好懂，也许具有一些理工科背景会更容易理解一些。
【作者 Terence Tao；译者山寨盲流；校对木遥】最近有不少人问我究竟“压缩感知”是什么意思（特别是随着最近这个概念名声大噪），所谓“单像素相机”又是怎样工作的（又怎么能在某些场合比传统相机有优势呢）。
这个课题已经有了大量文献，不过对于这么一个相对比较新的领域，还没有一篇优秀的非技术性介绍。
所以笔者在此小做尝试，希望能够对非数学专业的读者有所帮助。
具体而言我将主要讨论摄像应用，尽管压缩传感作为测量技术应用于比成像广泛得多的领域（例如天文学、核磁共振、统计选取等等），我将在帖子结尾简单谈谈这些领域。
基于压缩感知的凸优化算法研究及应用
分类号:密级:UDC:编号:工学硕士学位论文基于压缩感知的凸优化算法研究及应用硕士研究生:郝东斌指导 教师:张春杰 副教授学科、专业:信息与通信工程论文主审人:邓志安 副教授哈尔滨工程大学2019年3月分类号:密级:UDC:编号:工学硕士学位论文基于压缩感知的凸优化算法研究及应用硕士研究生:郝东斌指导教师:张春杰 副教授学位级别:工学硕士学科、专业:信息与通信工程所在单位:信息与通信工程学院论文提交日期:2019年1月论文答辩日期:2019年3月学位授予单位:哈尔滨工程大学Classified Index:U.D.C:A Dissertation for the Degree of M.EngResearch and Application of Convex Optimization Algorithms Based on Compressed SensingCandidate:Hao DongbinSupervisor:Assoc. Prof. Zhang ChunjieAcademic Degree Applied for:Master of EngineeringSpecialty:Information and Communication EngineeringDate of Submission:Jan, 2019Date of Oral Examination:Mar, 2019University:Harbin Engineering University哈尔滨工程大学学位论文原创性声明本人郑重声明:本论文的所有工作,是在导师的指导下,由作者本人独立完成的。
有关观点、方法、数据和文献的引用已在文中指出,并与参考文献相对应。
除文中已注明引用的内容外,本论文不包含任何其他个人或集体已经公开发表的作品成果。
对本文的研究做出重要贡献的个人和集体,均已在文中以明确方式标明。
本人完全意识到本声明的法律结果由本人承担。
KSVD-MOD
声明:本人属于绝对的新手,刚刚接触“稀疏表示”这个领域。
之所以写下以下的若干个连载,是鼓励自己不要急功近利,而要步步为赢!所以下文肯定有所纰漏,敬请指出,我们共同进步!踏入“稀疏表达”(Sparse Representation)这个领域,纯属偶然中的必然。
之前一直在研究压缩感知(Compressed Sensing)中的重构问题。
照常理来讲,首先会找一维的稀疏信号(如下图)来验证CS 理论中的一些原理,性质和算法,如测量矩阵为高斯随机矩阵,贝努利矩阵,亚高斯矩阵时使用BP,MP,OMP 等重构算法的异同和效果。
然后会找来二维稀疏信号来验证一些问题。
当然,就像你所想的,这些都太简单。
是的,接下来你肯定会考虑对于二维的稠密信号呢,如一幅lena图像?我们知道CS理论之所以能突破乃奎斯特采样定律,使用更少的采样信号来精确的还原原始信号,其中一个重要的先验知识就是该信号的稀疏性,不管是本身稀疏,还是在变换域稀疏的。
因此我们需要对二维的稠密信号稀疏化之后才能使用CS的理论完成重构。
问题来了,对于lena图像这样一个二维的信号,其怎样稀疏表示,在哪个变换域上是稀疏的,稀疏后又是什么?于是竭尽全力的google...后来发现了马毅的“Image Super-Resolution via Sparse Representation”(IEEE Transactions on Image Processing,Nov.2010)这篇文章,于是与稀疏表达的缘分开始啦!谈到稀疏表示就不能不提下面两位的团队,Yi Ma AND Elad Michael,国内很多高校(像TSinghua,USTC)的学生直奔两位而去。
(下图是Elad M的团队,后来知道了CS界大牛Donoho是Elad M的老师,怪不得...)其实对于马毅,之前稍有了解,因为韦穗老师,我们实验室的主任从前两年开始着手人脸识别这一领域并且取得了不错的成绩,人脸识别这个领域马毅算是大牛了...因此每次开会遇到相关的问题,韦老师总会提到马毅,于是通过各种渠道也了解了一些有关他科研和个人的信息。
复杂网络前沿
（图：网络重构示意，标注节点 x 及其邻居（Neighbors of x）、节点 y、匹配（matching）以及规模为 N 的全网络（Full network））
Z. Shen, W.-X. Wang*, Y. Fan, Z. Di and Y.-C. Lai, Nature Communications, to appear in 2014.
Reconstruction performance
Lett., 94, 48006 (2011).
• 推断隐藏节点 Phys. Rev. E 85, 065201(R) (2012).
• 预测时间序列同步 Phys. Rev. E 85, 056220 (2012).
• 重构通讯网络和路由策略（finished）
• 重构最后通牒博弈网络（finished）
• 重构公共品博弈网络（ongoing）
• 重构基因调控网络（ongoing）
• 重构布尔动力学网络（ongoing）
• 重构复合种群网络（病毒传播）（ongoing）
• 重构意见动力学网络（ongoing）
• 重构神经元网络（ongoing）
预测人的移动能力和交通拥塞
• 热传导模型(小勇) • 宏微观统一预测模型(小勇) • 预测交通拥塞
信息熵和可预测性
将不同路段平均速度分段,构造符号序列,计算路段的熵和可预测性
车速与可预测性
复杂网络的控制
How to control a car
Complex network: controlling complex networks is the ultimate goal!
Z. Shen, W.-X. Wang*, Y. Fan, Z. Di and Y.-C. Lai, Nature Communications, to appear in 2014.
科学松鼠会?数学突破奖解析:告诉你真实的数学研究
科学松鼠会·数学突破奖解析：告诉你真实的数学研究
科学是目前人类探知客观世界最好的方式。
对于科学的投入，尽管不一定能一蹴而就得到什么切实有用的成果，但长远来看却是技术发展最好的动力源。
与技术开发不同，对科学的投入更像是公益活动，因为科学研究得到的成果属于全人类。
而数学作为科学的语言，也有着类似的性质。
在目前富豪争相投身公益事业的社会潮流下，我们能听到的科学相关奖项也越来越多。
除去老牌的菲尔兹奖、诺贝尔奖以外，我们时不时还能听到一些新的奖项。
在前不久的6月23日，又有一个新的奖项横空出世，它名为“数学突破奖”，它的目标是“认可本领域内的重要进展，向最好的数学家授予荣誉，支持他们未来的科研事业，以及向一般公众传达数学激动人心之处”。
这个奖项引人注目之处之一是它的奖金来源：Facebook的创始人扎克伯格以及数码天空科技的创始人之一米尔诺。
此前他们还设立了“基础物理突破奖”与“生命科学突破奖”，合作者更包括Google创始人之一布林以及阿里巴巴的创始人马云。
他们都是互联网造就的新贵，大概也正因如此，他们也更理解科学的重要性，因为正是科学的飞速发展，带来了日新月异的信息技术，也给他们带来了庞大的财富。
另一个引人注目之处则是高昂的奖金：三百万美元，这是诺贝尔奖的2.5倍有余，与解决3个克雷研究所千年难题所能获得的金额相同。
这是科学奖项目前最高的奖金，它很好地完成了吸引公众眼球的任务。
那么，这次的获奖者都有哪些呢？他们的贡献又是什么呢？唐纳森（Simon Donaldson），来自石溪大学以及伦敦帝国学院，他因“四维流形革命性的新不变量，以及在丛与法诺簇两方面对其中代数几何与全局微分几何中稳定性之间联系的研究”而获奖。
孔采维奇（Maxim Kontsevich），来自法国高等科学研究院，他因“在包括代数几何、形变理论、辛拓扑、同调代数以及动力系统等数学众多领域中产生深刻影响的工作”而获奖。
陶哲轩调和讲义
前言
陶哲轩（Terence Tao），1975年出生于澳大利亚，是一位华裔数学家。
他以在数学领域的杰出贡献而闻名,并因此获得了多个国际奖项和荣誉。
陶哲轩的研究领域涵盖了许多数学分支,包括解析数论、偏微分方程、组合数学等。
本讲义旨在介绍陶哲轩的调和函数相关内容。
调和函数是一类重要的数学函数,在物理、工程、计算机科学等领域有广泛应用。
通过深入理解调和函数及其性质,我们可以更好地理解数学中的各种问题。
一、调和函数的定义在数学中,调和函数是指满足拉普拉斯方程的实值函数。
具体来说，对于二维平面上的函数u(x, y)，如果它满足以下方程：
∇²u = ∂²u/∂x² + ∂²u/∂y² = 0
则称u(x, y)为调和函数。
其中∇²表示拉普拉斯算子。
二、基本性质1.调和函数的线性性质:如果u₁(x, y)和u₂(x, y)是调和函数,那么它们的线性组合c₁u₁(x, y) + c₂u₂(x, y)也是调和函数,其中c₁和c₂为常数。
2.调和函数的最大值原理:如果u(x, y)是定义在有界区域D上的调和函数,那么它在D的边界上取得最大值或最小值。
换句话说,调和函数不能在其内部达到极值。
3.调和函数的平均值性质：设B(R)表示以点(x₀, y₀)为中心、半径为R的圆盘，则有以下等式成立：
u(x₀, y₀) = (1/(πR²)) ∫∫_{B(R)} u(x, y) dx dy
这个等式表明，调和函数在圆心(x₀, y₀)处的值等于它在整个圆盘B(R)上的平均值（在圆周上取平均也有同样的结论）。
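下面用几行代码（假设性示例）数值验证平均值性质：取调和函数 u(x, y) = x² − y²（易验证 ∂²u/∂x² + ∂²u/∂y² = 2 − 2 = 0），在以任意点为圆心的圆周上取点求平均，结果应与圆心处的函数值一致（圆盘平均同理）。

```python
import numpy as np

def u(x, y):
    return x**2 - y**2                  # 调和函数：u_xx + u_yy = 2 + (-2) = 0

x0, y0, R = 1.3, -0.7, 2.0              # 圆心 (x0, y0) 与半径 R
theta = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)

# 圆周平均值，近似 (1 / 2πR) ∮ u ds
circle_mean = np.mean(u(x0 + R * np.cos(theta), y0 + R * np.sin(theta)))

print("圆心处的值 u(x0, y0) =", u(x0, y0))
print("圆周上的平均值        =", circle_mean)
```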
三、常见类型1.常数函数:f(x,y)=C是最简单的调和函数,其中C为常数。
2.一次齐次多项式:f(x,y)=ax+by是一个一次齐次多项式,其中a、b为常数。
3.对数型调和函数:f(x,y)=ln(r),其中r为点(x,y)到原点的距离。
压缩感知_研究现状概述
（图：一维稀疏信号重构示例，原始信号（Original）与重构信号（Recovery）对比，横轴 0–300，纵轴约 −1.5～2）
概念及背景
compressive sensing实际上是对信号采集的颠覆性理论，打破了乃奎斯特采样（也称香农采样）。实际上，大部分信号是稀疏的，没有必要用乃奎斯特采样进行时间离散化。
注意两点：
（1）乃奎斯特采样对信号没有稀疏性的假设；
（2）CS对信号有稀疏性假设，即s-稀疏。
压缩感知适合解决什么问题？
（1）信号是稀疏的；
（2）sensor方计算代价较大，receiver方计算代价较小（即不适合将信息全部存储下来，而适合取少量信息，之后恢复）。
算法框架及具体内容
对于之前介绍过的编码表达式 y = Φx₀，其中 y ∈ Rⁿ，x₀ ∈ R^N，Φ 是 n×N 的矩阵。
我们先讲解码，就是试图通过 y 反求 x₀，记为 Δ。我们用 Δ(y) 表示反求结果。一般而言，若 n < N，则有无数个 x ∈ R^N 满足 y = Φx。因而，只有借助信号稀疏性的特征，我们才有可能反求原始的信号 x₀。那么，给定一编码、解码对 (Φ, Δ)，我们关心其性能。
压缩感知的难点在于：压缩后的数据并不是压缩前的数据的一个子集。并不是说，本来照相机的感光器上有一千万个像素，扔掉其中八百万个，剩下的两百万个采集到的就是压缩后的图像；这样只能采集到不完整的一小块图像，有些信息被永远地丢失了，而且不可能被恢复。
An Introduction To Compressive Sampling全文翻译
【图 1】
当然，这个原则是大多数先进有损编码器的理论基础，这些有损编码器包括 JPEG-2000 和其他压缩格式，而一个简单的数据压缩方法就是由 f 计算得到 x，然后（自适应地）编码求出 S 个重要系数的位置和值。由于重要信息段的位置可能预先未知（它们与信号有关），这一过程需要知道所有 n 个系数 x；在我们的例子中，那些重要信息往往聚集在图像的边缘位置。一般而言，稀疏性是一个基本的建模要素，它能够带来高效率的基本信号处理；例如，精确的统计估计和分类，有效的数据压缩，等等。本文所研究的内容大胆新颖且具有深远意义，其中稀疏性对于信号采集起着重要支撑作用，稀疏性决定了如何有效、非自适应地采集信号。
√(2 log n)。通过扩充，具有独立同分布元素的随机波形 φ_k(t)（例如高斯分布或 ±1 二进制项）也展示出与固定表示 Ψ 有非常低的相干性。注意到这里有一个非常奇特的暗示：如果非相干系统感知是良好的，那么有效的机制就应该获得与随机波形的关联，例如白噪声！
欠采样和稀疏信号的重构
理论上, 我们希望可以测量 f 的所有 n 个系数, 但实际上我们只观测它的一个子集, 采集的数据为:
如果相干度等于或接近于 1，那么 S·log n 次采样就足够了，而不需要 n 次采样。 3) 在事先不知道 x 非零坐标个数、位置的条件下，信号 f 可以通过极小化凸泛函从压缩数据集重构出来，它们的幅值事先完全未知。 这个定理确实提供了一种非常具体的捕获方案：在非相干域的非自适应采样和在采集之后的线性规划。按照这一方法，可以获得压缩形式的信号。所需要的是一个解码器去解压数据，这就是 l1-极小化所起的作用。 事实上，这个非相干采样是早先谱稀疏信号采样结果的推广，由此展现了随机性可以作为一种可靠且非常有效的传感机制，也许正是因此引发了现在 CS 蓬勃的发展。假设我们对超宽带采样感兴趣，但谱稀疏信号的形式为：
深度图像的分块自适应压缩感知
深度图像的分块自适应压缩感知王旭;王国中;范涛【期刊名称】《计算机应用研究》【年(卷),期】2016(033)003【摘要】In the depth coding system based on down /up sampling,the rendered virtual view suffers from image quality loss due to inaccurate edge information of depth images.Based on this fact,this paper proposed a block-based adaptive compressed sampling method of depth image.Based on the BCS-SPL framework combined block-based compressed sampling and smoothed Landweber projection,the proposed method used variance of blocks to describe edge information for adaptive sampling.The results show that under the same sampling rate,the proposed method achieves improvements both in objective and subjective rendering quality against down /up sampling and BCS-SPL.%针对基于空域上下采样的深度编码框架中,由边缘信息损失带来的视点绘制质量下降的问题,提出了一种面向视点绘制质量的深度图像分块自适应压缩采样方法。
在基于分块压缩感知和光滑 Landweber 投影重构的 BCS-SPL 框架下,利用图像块的方差表征其边缘信息,并据此进行自适应采样,以提高深度图像重构和视点合成质量。
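按照“用图像块方差表征边缘信息、据此自适应分配采样率”的思路，可以写出如下示意代码（假设性示例，方差到采样率的映射方式为自拟，并非论文原方案）：

```python
import numpy as np

def adaptive_rates(img, block=16, base_rate=0.2, max_rate=0.6):
    """按块方差分配采样率：方差大（边缘、纹理多）的块取更高的采样率。"""
    rows, cols = img.shape[0] // block, img.shape[1] // block
    var = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            var[i, j] = img[i*block:(i+1)*block, j*block:(j+1)*block].var()
    weight = var / (var.max() + 1e-12)          # 归一化方差作为权重
    return base_rate + (max_rate - base_rate) * weight

# 用法示意：左半平坦、右半含随机“纹理”的测试深度图
depth = np.zeros((64, 64))
depth[:, 32:] = np.random.default_rng(5).random((64, 32))
print(adaptive_rates(depth).round(2))           # 右侧的块获得更高的采样率
```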
压缩感知中秩的计算
压缩感知(Compressed Sensing)是一种新兴的理论和技术,它突破了传统信号处理中奈奎斯特采样定理的限制,可以从远远少于奈奎斯特采样率的测量值重建出完整的稀疏或压缩信号。
压缩感知的核心思想是利用稀疏性先验,通过非适应线性测量获取信号的少量线性测量值,然后基于优化理论对高维未知信号进行重建。
低秩矩阵恢复（Matrix Completion，矩阵补全）是压缩感知的一个重要分支，研究如何利用矩阵的低秩结构从部分观测值准确恢复出完整的矩阵。
低秩矩阵广泛存在于多种应用领域，如协同过滤推荐系统、计算机视觉、量子态重建等。
低秩恢复的计算主要包括以下几个方面：
1. 理论分析
通过研究随机线性测量算子的性质,建立测量值与原始矩阵之间的准确恢复条件,给出所需的采样复杂度下界。
同时，分析低秩矩阵在不同噪声模型下的可恢复性。
2. 恢复算法设计
设计高效可靠的低秩矩阵恢复算法，主要包括核范数最小化、平滑度最优化等凸优化方法，以及基于因子分解的非凸优化方法。
这些算法在理论上可以恢复满足一定条件的低秩矩阵，并具有较好的噪声鲁棒性。
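作为其中“基于因子分解的非凸优化方法”的一个极简示意（假设性示例，函数与参数均为自拟）：把待恢复矩阵写成 X = UVᵀ（列数取目标秩），对观测到的元素交替最小二乘地更新 U 与 V。

```python
import numpy as np

def als_complete(M_obs, mask, r=2, n_iter=50, lam=1e-3):
    """因子分解式矩阵补全：固定 V 按行最小二乘解 U，再固定 U 解 V，只拟合观测元素。"""
    m, n = M_obs.shape
    rng = np.random.default_rng(0)
    U, V = rng.standard_normal((m, r)), rng.standard_normal((n, r))
    reg = lam * np.eye(r)
    for _ in range(n_iter):
        for i in range(m):
            idx = mask[i]                       # 第 i 行被观测到的列
            U[i] = np.linalg.solve(V[idx].T @ V[idx] + reg, V[idx].T @ M_obs[i, idx])
        for j in range(n):
            idx = mask[:, j]                    # 第 j 列被观测到的行
            V[j] = np.linalg.solve(U[idx].T @ U[idx] + reg, U[idx].T @ M_obs[idx, j])
    return U @ V.T

# 用法示意：秩 2、仅观测 40% 元素的 40×30 矩阵
rng = np.random.default_rng(6)
A = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 30))
mask = rng.random(A.shape) < 0.4
X = als_complete(A * mask, mask)
print("未观测位置的相对误差:",
      np.linalg.norm((X - A)[~mask]) / np.linalg.norm(A[~mask]))
```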
3. 理论计算
研究基于压缩感知理论的新型算法,降低恢复复杂度,提高恢复精度。
包括自适应采样、分层恢复、并行分布式计算等技术,以满足海量数据的实时处理需求。
4. 应用拓展
将压缩感知低秩恢复理论应用于计算机视觉、信号处理、模式识别等多个领域，解决图像渲染、视频补全、基因数据处理等实际问题。
压缩感知低秩恢复的计算理论和算法为高维数据的压缩采样与精确恢复提供了有力工具，在大数据时代具有重要的理论意义和应用价值。
Robust Uncertainty Principles:Exact Signal Reconstruction from Highly IncompleteFrequency InformationEmmanuel Candes †,Justin Romberg †,and Terence Tao†Applied and Computational Mathematics,Caltech,Pasadena,CA 91125 Department of Mathematics,University of California,Los Angeles,CA 90095June 2004;Revised August 2005AbstractThis paper considers the model problem of reconstructing an object from incomplete frequency samples.Consider a discrete-time signal f ∈C N and a randomly chosen set of frequencies Ω.Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω?A typical result of this paper is as follows.Suppose that f is a superposition of |T |spikes f (t )= τ∈T f (τ)δ(t −τ)obeying|T |≤C M ·(log N )−1·|Ω|,for some constant C M >0.We do not know the locations of the spikes nor their amplitudes.Then with probability at least 1−O (N −M ),f can be reconstructed exactly as the solution to the 1minimization problemmin g N −1 t =0|g (t )|,s.t.ˆg (ω)=ˆf(ω)for all ω∈Ω.In short,exact recovery may be obtained by solving a convex optimization problem.We give numerical values for C M which depend on the desired probability of success.Our result may be interpreted as a novel kind of nonlinear sampling theorem.In effect,it says that any signal made out of |T |spikes may be recovered by convex programming from almost every set of frequencies of size O (|T |·log N ).Moreover,this is nearly optimal in the sense that any method succeeding with probability 1−O (N −M )would in general require a number of frequency samples at least proportional to |T |·log N .The methodology extends to a variety of other situations and higher dimensions.For example,we show how one can reconstruct a piecewise constant (one-or two-dimensional)object from incomplete frequency samples—provided that the number of jumps (discontinuities)obeys the condition above—by minimizing other convex func-tionals such as the total variation of f .Keywords.Random matrices,free probability,sparsity,trigonometric expansions,uncertainty principle,convex optimization,duality in optimization,total-variation minimization,image recon-struction,linear programming.Acknowledgments. E.C.is partially supported by a National Science Foundation grant DMS 01-40698(FRG)and by an Alfred P.Sloan Fellowship.J.R.is supported by National Science Foundation grants DMS01-40698and ITR ACI-0204932.T.T.is a Clay Prize Fellow and is supported in part by grants from the Packard Foundation.E.C.and T.T.thank the Institute for Pure and Applied Mathematics at UCLA for their warm hospitality. 
E.C.would like to thank Amos Ron and David Donoho for stimulating conversations,and Po-Shen Loh for early numerical experiments on a related project.We would also like to thank Holger Rauhut for corrections on an earlier version and the anonymous referees for their comments and references.1IntroductionIn many applications of practical interest,we often wish to reconstruct an object(a discrete signal,a discrete image,etc.)from incomplete Fourier samples.In a discrete setting,we may pose the problem as follows;letˆf be the Fourier transform of a discrete object f(t), t=(t1,...,t d)∈Z d:={0,1,...,N−1}d,Nˆf(ω)=f(t)e−2πi(ω1t1+...+ωd t d)/N.t∈Z dNThe problem is then to recover f from partial frequency information,namely,fromˆf(ω), whereω=(ω1,...,ωd)belongs to some setΩof cardinality less than N d—the size of the discrete object.In this paper,we show that we can recover f exactly from observationsˆf|Ωon small set of frequencies provided that f is sparse.The recovery consists of solving a straightforward optimization problem thatfinds f of minimal complexity withˆf (ω)=ˆf(ω),∀ω∈Ω.1.1A puzzling numerical experimentThis idea is best motivated by an experiment with surprisingly positive results.Consider a simplified version of the classical tomography problem in medical imaging:we wish to reconstruct a2D image f(t1,t2)from samplesˆf|Ωof its discrete Fourier transform on a star-shaped domainΩ[4].Our choice of domain is not contrived;many real imaging devices collect high-resolution samples along radial lines at relatively few angles.Figure1(b) illustrates a typical case where one gathers512samples along each of22radial lines. Frequently discussed approaches in the literature of medical imaging for reconstructing an object from polar frequency samples are the so-calledfiltered backprojection algorithms.In a nutshell,one assumes that the Fourier coefficients at all of the unobserved frequencies are zero(thus reconstructing the image of“minimal energy”under the observation constraints). 
This strategy does not perform very well,and could hardly be used for medical diagnostics [24].The reconstructed image,shown in Figure1(c),has severe nonlocal artifacts caused by the angular undersampling.A good reconstruction algorithm,it seems,would have to guess the values of the missing Fourier coefficients.In other words,one would need to interpolateˆf(ω1,ω2).This seems highly problematic,however;predictions of Fourier coefficients from their neighbors are very delicate,due to the global and highly oscillatory nature of the Fourier transform.Going back to the example in Figure1,we can see the(c)(d)Figure1:Example of a simple recovery problem.(a)The Logan-Shepp phantom test image.(b)Sampling domainΩin the frequency plane;Fourier coefficients are sampled along22approximately radial lines.(c)Minimum energy reconstruction obtained by setting unobserved Fourier coefficients to zero.(d)Reconstruction obtained by minimizing the total variation,as in(1.1).The reconstruction is an exact replica of the image in(a).problem immediately.To recover frequency information near(2πω1/N,2πω2/N),where 2πω1/N is near±π,we would need to interpolateˆf at the Nyquist rate2π/N.However, we only have samples at rate aboutπ/22;the sampling rate is almost50times smaller than the Nyquist rate!We propose instead a strategy based on convex optimization.Let g T V be the total-variation norm of a two-dimensional object g.For discrete data g(t1,t2),0≤t1,t2≤N−1,g T V=t1,t2|D1g(t1,t2)|2+|D2g(t1,t2)|2,where D1is thefinite difference D1g=g(t1,t2)−g(t1−1,t2)and D2g=g(t1,t2)−g(t1,t2−1).To recover f from partial Fourier samples,wefind a solution f to the optimization problemmin g T V subject toˆg(ω)=ˆf(ω)for allω∈Ω.(1.1) In a nutshell,given partial observationˆf|Ω,we seek a solution f with minimum complexity—here Total Variation(TV)—and whose“visible”coefficients match those of the unknown object f.Our hope here is to partially erase some of the artifacts that classical reconstruc-tion methods exhibit(which tend to have large TV norm)while maintainingfidelity to the observed data via the constraints on the Fourier coefficients of the reconstruction.When we use(1.1)for the recovery problem illustrated in Figure1(with the popular Logan-Shepp phantom as a test image),the results are surprising.The reconstruction is exact; that is,f =f!This numerical result is also not special to this phantom.In fact,we performed a series of experiments of this type and obtained perfect reconstruction on many similar test phantoms.1.2Main resultsThis paper is about a quantitative understanding of this very special phenomenon.For which classes of signals/images can we expect perfect reconstruction?What are the trade-offs between complexity and number of samples?In order to answer these questions,we first develop a fundamental mathematical understanding of a special one-dimensional model problem.We then exhibit reconstruction strategies which are shown to exactly reconstruct certain unknown signals,and can be extended for use in a variety of related and sophisti-cated reconstruction applications.For a signal f∈C N,we define the classical discrete Fourier transform F f=ˆf:C N→C N byˆf(ω):=N−1t=0f(t)e−2πiωt/N,ω=0,1,...,N−1.(1.2)If we are given the value of the Fourier coefficientsˆf(ω)for all frequenciesω∈Z N,then one can obviously reconstruct f exactly via the Fourier inversion formulaf(t)=1NN−1ω=0ˆf(ω)e2πiωt/N.Now suppose that we are only given the Fourier coefficientsˆf|Ωsampled on some partial subsetΩ Z N of all frequencies.Of course,this is not 
enough information to reconstructf exactly in general;f has N degrees of freedom and we are only specifying|Ω|<N of those degrees(here and below|Ω|denotes the cardinality ofΩ).Suppose,however,that we also specify that f is supported on a small(but a priori unknown) subset T of Z N;that is,we assume that f can be written as a sparse superposition of spikesf(t)=τ∈Tf(τ)δ(t−τ),δ(t)=1{t=0}.In the case where N is prime,the following theorem tells us that it is possible to recover f exactly if|T|is small enough.Theorem1.1Suppose that the signal length N is a prime integer.LetΩbe a subset of {0,...,N−1},and let f be a vector supported on T such that|T|≤12|Ω|.(1.3)Then f can be reconstructed uniquely fromΩandˆf|Ω.Conversely,ifΩis not the set of all N frequencies,then there exist distinct vectors f,g such that|supp(f)|,|supp(g)|≤12|Ω|+1 and such thatˆf|Ω=ˆg|Ω.Proof We will need the following lemma[29],from which we see that with knowledge of T,we can reconstruct f uniquely(using linear algebra)fromˆf|Ω:Lemma1.2([29],Corollary1.4)Let N be a prime integer and T,Ωbe subsets of Z N. Put 2(T)(resp. 2(Ω))to be the space of signals that are zero outside of T(resp.Ω).The restricted Fourier transform F T→Ω: 2(T)→ 2(Ω)is defined asF T→Ωf:=ˆf|Ωfor all f∈ 2(T),If|T|=|Ω|,then F T→Ωis a bijection;as a consequence,we thus see that F T→Ωis injective for|T|≤|Ω|and surjective for|T|≥|Ω|.Clearly,the same claims hold if the Fourier transform F is replaced by the inverse Fourier transform F−1.To prove Theorem1.1,assume that|T|≤12|Ω|.Suppose for contradiction that there weretwo objects f,g such thatˆf|Ω=ˆg|Ωand|supp(f)|,|supp(g)|≤12|Ω|.Then the Fouriertransform of f−g vanishes onΩ,and|supp(f−g)|≤|Ω|.By Lemma1.2we see that F supp(f−g)→Ωis injective,and thus f−g=0.The uniqueness claim follows.We now examine the converse claim.Since|Ω|<N,we canfind disjoint subsets T,S of Z N such that|T|,|S|≤12|Ω|+1and|T|+|S|=|Ω|+1.Letω0be some frequency whichdoes not lie inΩ.Applying Lemma1.2,we have that F T∪S→Ω∪{ω0}is a bijection,and thuswe canfind a vector h supported on T∪S whose Fourier transform vanishes onΩbut is nonzero onω0;in particular,h is not identically zero.The claim now follows by taking f:=h|T and g:=−h|S.Note that if N is not prime,the lemma(and hence the theorem)fails,essentially because of the presence of non-trivial subgroups of Z N with addition modulo N;see Sections1.3and1.4for concrete counter examples,and[7],[29]for further discussion.However,it is plausible to think that Lemma1.2continues to hold for non-prime N if T andΩare assumed to be generic—in particular,they are not subgroups of Z N,or cosets of subgroups. 
If T andΩare selected uniformly at random,then it is expected that the theorem holds with probability very close to one;one can indeed presumably quantify this statement by adapting the arguments given above but we will not do so here.However,we refer the reader to Section1.7for a rapid presentation of informal arguments pointing in this direction.A refinement of the argument in Theorem1.1shows that forfixed subsets T,S of time domain anΩin the frequency domain,the space of vectors f,g supported on T,S such thatˆf|Ω=ˆg|Ωhas dimension|T∪S|−|Ω|when|T∪S|≥|Ω|,and has dimension|T∩S| otherwise.In particular,if we letΣ(N t)denote those vectors whose support has size at most N t,then set of the vectors inΣ(N t)which cannot be reconstructed uniquely in this class from the Fourier coefficients sampled atΩ,is contained in afinite union of linear spaces of dimension at most2N t−|Ω|.SinceΣ(N t)itself is afinite union of linear spaces of dimension N t,we thus see that recovery of f fromˆf|Ωis in principle possible generically whenever|supp(f)|=N t<|Ω|;once N t≥|Ω|,however,it is clear from simple degrees-of-freedom arguments that unique recovery is no longer possible.While our methods do not quite attain this theoretical upper bound for correct recovery,our numerical experiements suggest that they do come within a constant factor of this bound(see Figure2). Theorem1.1asserts that one can reconstruct f from2|T|frequency samples(and that, in general,there is no hope to do so from fewer samples).In principle,we can recover f exactly by solving the combinatorial optimization problem(P0)ming∈C N g,ˆg|Ω=ˆf|Ω,(1.4)where gis the number of nonzero terms#{t,g(t)=0}.This is a combinatorial optimization problem,and solving(1.4)directly is infeasible even for modest-sized signals. 
To the best of our knowledge,one would essentially need to let T vary over all subsetsT⊂{0,...,N−1}of cardinality|T|≤12|Ω|,checking for each one whether f is in therange of F T→Ωor not,and then invert the relevant minor of the Fourier matrix to recover f once T is determined.Clearly,this is computationally very expensive since there are exponentially many subsets to check;for instance,if|Ω|∼N/2,then the number of subsets scales like4N·3−3N/4!As an aside comment,note that it is also not clear how to make this algorithm robust,especially since the results in[29]do not provide any effective lower bound on the determinant of the minors of the Fourier matrix,see Section6for a discussion of this point.A more computationally efficient strategy for recovering f fromΩandˆf|Ωis to solve the convex problem(P1)ming∈C N g1:=t∈Z N|g(t)|,ˆg|Ω=ˆf|Ω.(1.5)The key result in this paper is that the solutions to(P0)and(P1)are equivalent for an overwhelming percentage of the choices for T andΩwith|T|≤α·|Ω|/log N(α>0is a constant):in these cases,solving the convex problem(P1)recovers f exactly.To establish this upper bound,we will assume that the observed Fourier coefficients are randomly sampled.Given the number Nωof samples to take in the Fourier domain,wechoose the subset Ωuniformly at random from all sets of this size;i.e.each of the N N ω possible subsets are equally likely.Our main theorem can now be stated as follows.Theorem 1.3Let f ∈C N be a discrete signal supported on an unknown set T ,and choose Ωof size |Ω|=N ωuniformly at random.For a given accuracy parameter M ,if|T |≤C M ·(log N )−1·|Ω|,(1.6)then with probability at least 1−O (N −M ),the minimizer to the problem (1.5)is unique and is equal to f .Notice that (1.6)essentially says that |T |is of size |Ω|,modulo a constant and a logarithmic factor.Our proof gives an explicit value of C M ,namely,C M 1/[23(M +1)](valid for |Ω|≤N/4,M ≥2and N ≥20,say)although we have not pursued the question of exactly what the optimal value might be.In Section 5,we present numerical results which suggest that in practice,we can expect to recover most signals f more than 50%of the time if the size of the support obeys |T |≤|Ω|/4.By most signals,we mean that we empirically study the success rate for randomly selected signals,and do not search for the worst case signal f —that which needs the most frequency samples.For |T |≤|Ω|/8,the recovery rate is above 90%.Empirically,the constants 1/4and 1/8do not seem to vary for N in the range of a few hundred to a few thousand.1.3For almost every ΩAs the theorem allows,there exist sets Ωand functions f for which the 1-minimization procedure does not recover f correctly,even if |supp(f )|is much smaller than |Ω|.We sketch two counter examples:•A discrete Dirac comb.Suppose that N is a perfect square and consider the picket-fence signal which consists of spikes of unit height and with uniform spacing equal to √N .This signal is often used as an extremal point for uncertainty principles [7,8]as one of its remarkable properties is its invariance through the Fourier transform.Hence suppose that Ωis the set of all frequencies but the multiples of √N ,namely,|Ω|=N −√N .Then ˆf|Ω=0and obviously the reconstruction is identically zero.Note that the problem here does not really have anything to do with 1-minimization per se;f cannot be reconstructed from its Fourier samples on Ωthereby showing that Theorem 1.1does not work “as is”for arbitrary sample sizes.•Boxcar signals .The example above suggests that in some sense |T |must not be greater 
than about |Ω|.In fact,there exist more extreme examples.Assume the sample size N is large and consider for example the indicator function f of the interval T :={t :−N −0.01<t −N/2<N 0.01}and let Ωbe the set Ω:={ω:N/3<ω<2N/3}.Let h be a function whose Fourier transform ˆhis a nonnegative bump function adapted to the interval {ω:−N/6<ω<N/6}which equals 1when −N/12<ω<N/12.Then |h (t )|2has Fourier transform vanishing in Ω,and is rapidly decreasing away from t =0;in particular we have |h (t )|2=O (N −100)for t ∈T .On the other hand,one easily computes that |h (0)|2>c for some absolute constant c >0.Because ofthis,the signal f−ε|h|2will have smaller 1-norm than f forε>0sufficiently small (and N sufficiently large),while still having the same Fourier coefficients as f onΩ.Thus in this case f is not the minimizer to the problem(P1),despite the fact that the support of f is much smaller than that ofΩ.The above counter examples relied heavily on the special choice ofΩ(and to a lesser extent of supp(f));in particular,it needed the fact that the complement ofΩcontained a large interval(or more generally,a long arithmetic progression).But for most setsΩ,large arithmetic progressions in the complement do not exist,and the problem largely disappears. In short,Theorem1.3essentially says that for most sets of T of size about|Ω|,there is no loss of information.1.4OptimalityTheorem1.3states that for any signal f supported on an arbitrary set T in the time domain, (P1)recovers f exactly—with high probability—from a number of frequency samples that is within a constant of M·|T|log N.It is natural to wonder whether this is a fundamental limit.In other words,is there an algorithm that can recover an arbitrary signal from far fewer random observations,and with the same probability of success?It is clear that the number of samples needs to be at least proportional to|T|,otherwise F T→Ωwill not be injective.We argue here that it must also be proportional to M log N to guarantee recovery of certain signals from the vast majority of setsΩof a certain size. 
Suppose f is the Dirac comb signal discussed in the previous section.If we want to have a chance of recovering f,then at the very least,the observation setΩand the frequency support W=suppˆf must overlap at one location;otherwise,all of the observations are zero,and nothing can be done.ChoosingΩuniformly at random,the probability that it includes none of the members of W isP(Ω∩W=∅)= N−√N|Ω|N|Ω|≥1−2|Ω|N√N,where we have used the assumption that|Ω|>|T|=√N.Then for P(Ω∩W=∅)to besmaller than N−M,it must be true that√N·log1−2|Ω|N≤−M log N,and if we make the restriction that|Ω|cannot be as large as N/2,meaning that log(1−2|Ω| N )≈−2|Ω|N,we have|Ω|≥Const·M·√·log N.For the Dirac comb then,any algorithm must have|Ω|∼|T|M log N observations for the identified probability of success.Examples for larger supports T exist as well.If N is an even power of two,we can su-perimpose2m Dirac combs at dyadic shifts to construct signals with time-domain support|T |=2m √N and frequency-domain support |W |=2−m √N for m =0,...,log 2√N .The same argument as above would then dictate that|Ω|≥Const ·M ·N |W |·log N =Const ·M ·|T |·log N.In short,Theorem 1.3identifies a fundamental limit.No recovery can be successful for all signals using significantly fewer observations.1.5ExtensionsAs mentioned earlier,results for our model problem extend easily to higher dimensions and alternate recovery scenarios.To be concrete,consider the problem of recovering a one-dimensional piecewise constant signal viamin g t ∈Z N|g (t )−g (t −1)|ˆg |Ω=ˆf |Ω,(1.7)where we adopt the convention that g (−1)=g (N −1).In a nutshell,model (1.5)is obtained from (1.7)after differentiation.Indeed,let δbe the vector of first difference δ(t )=g (t )−g (t −1),and note that δ(t )=0.Obviously,ˆδ(ω)=(1−e −2πiω/N )ˆg (ω),for all ω=0and,therefore,with υ(ω)=(1−e −2πiω/N )−1,the problem is identical tomin δ δ 1ˆδ|Ω\{0}=(υˆf )|Ω\{0},ˆδ(0)=0,which is precisely what we have been studying.Corollary 1.4Put T ={t,f (t )=f (t −1)}.Under the assumptions of Theorem 1.3,the minimizer to the problem (1.7)is unique and is equal f with probability at least 1−O (N −M )—provided that f be adjusted so that f (t )=ˆf(0).We now explore versions of Theorem 1.3in higher dimensions.To be concrete,consider the two-dimensional situation (statements in arbitrary dimensions are exactly of the same flavor):Theorem 1.5Put N =n 2.We let f (t 1,t 2),1≤t 1,t 2≤n be a discrete real-valued image and Ωof a certain size be chosen uniformly at random.Assume that for a given accuracy parameter M ,f is supported on T obeying (1.6).Then with probability at least 1−O (N −M ),the minimizer to the problem (1.5)is unique and is equal to f .We will not prove this result as the strategy is exactly parallel to that of Theorem 1.3.Letting D 1f be the horizontal finite differences D 1f (t 1,t 2)=f (t 1,t 2)−f (t 1−1,t 2)and D 2f be the vertical analog,we have just seen that we can think about the data as the properly renormalized Fourier coefficients of D 1f and D 2f .Now put d =D 1f +iD 2f ,where i 2=−1.Then the minimum total-variation problem may be expressed asmin δ 1subject to F Ωδ=F Ωd,(1.8)where FΩis a partial Fourier transform.One then obtains a statement for piecewise constant 2D functions,which is similar to that for sparse1D signals provided that the support of f be replaced by{(t1,t2):|D1f(t1,t2)|2+|D2f(t1,t2)|2=0}.We omit the details.The main point here is that there actually are a variety of results similar to Theorem 1.3.Theorem1.5serves as another recovery example,and provides a precise 
quantitative understanding of the“surprising result”discussed at the beginning of this paper.To be complete,we would like to mention that for complex valued signals,the minimum 1problem(1.5)and,therefore,the minimum TV problem(1.1)can be recast as special convex programs known as second order cone programs(SOCP’s).For example,(1.8)is equivalent tomint u(t)subject to|δ1(t)|2+|δ2(t)|2≤u(t),(1.9)FΩ(δ1+iδ2)=FΩd,with variables u,δ1andδ2in R N(δ1andδ2are the real and imaginary parts ofδ).If in addition,δis real valued,then this is a linear program.Much progress has been made in the past decade on algorithms to solve both linear and second-order cone programs[25], and many off-the-shelf software packages exist for solving problems such as(P1)and(1.9).1.6Relationship to uncertainty principlesFrom a certain point of view,our results are connected to the so-called uncertainty principles [7,8]which say that it is difficult to localize a signal f∈C N both in time and frequency at the same time.Indeed,classical arguments show that f is the unique minimizer of(P1)if and only ift∈Z N |f(t)+h(t)|>t∈Z N|f(t)|,∀h=0,ˆh|Ω=0Put T=supp(f)and apply the triangle inequalityZ N |f(t)+h(t)|=T|f(t)+h(t)|+T c|h(t)|≥T|f(t)|−|h(t)|+T c|h(t)|.Hence,a sufficient condition to establish that f is our unique solution would be to showthatT |h(t)|<T c|h(t)|∀h=0,ˆh|Ω=0.or equivalentlyT|h(t)|<12h1.The connection with the uncertainty principle is nowexplicit;f is the unique minimizer if it is impossible to concentrate half of the 1norm of a signal that is missing frequency components inΩon a“small”set T.For example,[7] guarantees exact reconstruction if2|T|·(N−|Ω|)<N.Take|Ω|<N/2,then that condition says that|T|must be zero which is far from being the content of Theorem1.3.By refining these uncertainty principles,[6]shows that a much stronger recovery result is possible.The central results of[6]imply that a signal consisting of|T|spikes which are spread out in a somewhat even manner in the time domain can be recovered from C·|T| lowpass observations.Theorem1.3is different in that it applies to all signals with a certain support size,and does not rely on a special choice ofΩ(almost anyΩwhich is large enough will work).The price for this additional power is that we require a factor of log N more observations.In truth,this paper does not follow this classical approach of deriving a recovery condition directly from an uncertainty principle.Instead,we will use duality theory to study the solution of(P1).However,a byproduct of our analysis will be a novel uncertainty principle that holds for generic sets T,Ω.1.7Robust uncertainty principlesUnderlying our results is a new notion of uncertainty principle which holds for almost any pair(supp(f),supp(ˆf)).With T=supp(f)andΩ=supp(ˆf),the classical discrete uncertainty principle[7]says that|T|+|Ω|≥2√(1.10)with equality obtained for signals such as the Dirac comb.As we mentioned above,such extremal signals correspond to very special pairs(T,Ω).However,for most choices of T andΩ,the analysis presented in this paper shows that it is impossible tofind f such that T=supp(f)andΩ=supp(ˆf)unless|T|+|Ω|≥γ(M)·(log N)−1/2·N,(1.11) which is considerably stronger than(1.10).Here,the statement“most pairs”says again that the probability of selecting a random pair(T,Ω)violating(1.11)is at most O(N−M). 
In some sense,(1.11)is the typical uncertainty relation one can generally expect(as opposed to(1.10)),hence,justifying the title of this paper.Because of space limitation,we are unable to elaborate on this fact and its implications further,but will do so in a companion paper.1.8Connections with existing workThe idea of relaxing a combinatorial problem into a convex problem is not new and goes back a long way.For example,[5,27]used the idea of minimizing 1norms to recover spike trains.The motivation is that this makes available a host of computationally feasible procedures.For example,a convex problem of the type(1.5)can be practically solved using techniques of linear programming such as interior point methods[3].Using an 1minimization program to recover sparse signals has been proposed in several different contexts.Early work in geophysics[23,26,27]centered on super-resolving spike trains from bandlimited observations,i.e.the case whereΩconsists of low-pass ter works[6,7]provided a unified framework in which to interpret these results by demonstrating that the effectiveness of recovery via minimizing 1was linked to discrete uncertainty principles.As mentioned in Section1.6,these papers derived explicit boundson the number of frequency samples needed to reconstruct a sparse signal.The earlier reference[7]also contains a conjecture that more powerful uncertainty principles may exist if one of T,Ωis chosen at random,which is essentially the content of Section1.7here. More recently,there exists a series of beautiful papers[8–10,13,18]concerned with prob-lem offinding the sparsest decomposition of a signal f using waveforms from a highly overcomplete dictionary D.One seeks the sparsestαsuch thatf=Dα,(1.12)where the number of columns M from D is greater than the sample size N.Consider the solution which minimizes the 0norm ofαsubject to the constraint(1.12)and that which minimizes the 1norm.A typical result of this body of work is as follows:suppose that s can be synthesized out of very few elements from D,then the solution to both problems are unique and are equal.We also refer to[30,31]for very recent results along these lines. 
This literature certainly influenced our thinking in the sense it made us suspect that results such as Theorem1.3were actually possible.However,we would like to emphasize that the claims presented in this paper are of a substantially different nature.We give essentially two reasons:1.Our model problem is different since we need to“guess”a signal from incompletedata,as opposed tofinding the sparsest expansion of a fully specified signal.2.Our approach is decidedly probabilistic—as opposed to deterministic—and thus callsfor very different techniques.For example,underlying our analysis are delicate esti-mates for the norms of certain types of random matrices,which may be of independent interest.Apart from the wonderful properties of 1,several novel sampling theorems have in intro-duced in recent years.In[11,12]the authors study universal sampling patters that allow the exact reconstruction of signals supported on a small set.In[32],ideas from spectral analysis are leveraged to show that a sequence of N t spikes can be recovered exactly from 2N t+1consecutive Fourier samples(in[32]for example,the recovery requires solving a system of equations and factoring a polynomial).Our results,namely,Theorems1.1and 1.3require slightly more samples to be taken(C·N t log N versus C·N t),but are again more general in that they address the radically different situation in which we do not have the freedom to choose the sample locations at our convenience.Finally,it is interesting to note that our results and the references above are also related to recent work[15]infinding near-best B-term Fourier approximations(which is in some sense the dual to our recovery problem).The algorithm in[15,16],which operates by estimating the frequencies present in the signal from a small number of randomly placed samples, produces with high probability an approximation in sublinear time with error within a constant of the best B-term approximation.First,in[16]the samples are again selected to be equispaced whereas we are not at liberty to choose the frequency samples at all since they are specified a priori.And second,we wish to produce as a result an entire signal or image of size N,so a sublinear algorithm is an impossibility.。