Compressed Sensing Reconstruction Algorithms: Basis Pursuit (BP)
Besides the greedy iterative algorithms of the matching-pursuit family, the other major class of compressed sensing reconstruction algorithms consists of convex optimization (optimal approximation) methods. These methods obtain an approximation of the signal by converting the non-convex problem into a convex one and solving that instead. The most commonly used method of this kind is Basis Pursuit (BP), which proposes replacing the ℓ0 norm with the ℓ1 norm in the optimization problem, so that it can be solved by linear programming [1].
In this post we explain the Basis Pursuit method. Understanding it requires some background in optimization; see the posts in the optimization-methods category for reference.
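For concreteness, the two optimization problems being contrasted here are the standard sparse-recovery formulations (written in the notation of the quotation below, where θ is the coefficient vector and y = Φθ the measurements):

(P0): \min_\theta \|\theta\|_0 \quad \text{subject to} \quad \Phi\theta = y,
(P1): \min_\theta \|\theta\|_1 \quad \text{subject to} \quad \Phi\theta = y.

(P0) counts the nonzero entries of θ and is combinatorial, while (P1) is convex and, as shown below, can be rewritten as a linear program.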
1. Equivalence of ℓ1-norm and ℓ0-norm minimization
Section 4 of reference [2] proves in fair detail that ℓ1-norm minimization and ℓ0-norm minimization are equivalent under certain conditions. The proof is a rather involved mathematical derivation, so here we quote the original text as much as possible. First, Section 4.1 of [2] states problem (P1) and gives its linear-programming equivalent (LP); this equivalence is discussed in more detail later.
4.1 The Case p = 1

In the case p = 1, (P1) is a convex optimization problem. Write it out in an equivalent form, with θ being the optimization variable:

(P1): \min_\theta \|\theta\|_1 \quad \text{subject to} \quad \Phi\theta = y_n.

This can be formulated as a linear programming problem: let A be the n by 2m matrix [Φ −Φ]. The linear program

(LP): \min_z \mathbf{1}^T z \quad \text{subject to} \quad Az = y_n, \; z \ge 0,

has a solution z^*, say, a vector in R^{2m} which can be partitioned as z^* = [u^* v^*]; then θ^* = u^* − v^* solves (P1). The reconstruction is x̂_{1,n} = Ψθ^*. This linear program is typically considered computationally tractable. In fact, this problem has been studied in the signal analysis literature under the name Basis Pursuit [7]; in that work, very large-scale underdetermined problems […]
2. The l1-MAGIC toolbox for Basis Pursuit
Any discussion of implementing Basis Pursuit has to mention the l1-MAGIC toolbox (toolbox homepage: /~justin/l1magic/). The homepage introduces it as follows: "L1-MAGIC is a collection of MATLAB routines for solving the convex optimization programs central to compressive sampling. The algorithms are based on standard interior-point methods, and are suitable for large-scale problems." The toolbox also comes with a dedicated user guide, "l1-magic: Recovery of Sparse Signals via Convex Programming", which can be downloaded from the homepage.
The toolbox solves seven problems in total, the first of which is precisely Basis Pursuit:

Min-ℓ1 with equality constraints. The problem

(P1): \min \|x\|_1 \quad \text{subject to} \quad Ax = b,

also known as basis pursuit, finds the vector with smallest ℓ1 norm

\|x\|_1 := \sum_i |x_i|

that explains the observations b. As the results in [4, 6] show, if a sufficiently sparse x0 exists such that Ax0 = b, then (P1) will find it. When x, A, b have real-valued entries, (P1) can be recast as an LP (this is discussed in detail in [10]).
The toolbox provides code specifically for (P1); see l1eq_example.m for its usage, which is also described in Section 3.1 of the user guide.
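As a quick orientation, the call pattern in l1eq_example.m is roughly the following sketch (the starting point A'*y and the tolerance 1e-3 are taken from my reading of the example script and are assumptions to verify against your copy of the toolbox):

% Sketch: solve (P1) min ||x||_1 s.t. Ax = y with l1-MAGIC (toolbox must be on the MATLAB path)
% A: M-by-N sensing matrix, y: M-by-1 measurement vector
x0 = A'*y;                        %starting point handed to the solver
xp = l1eq_pd(x0, A, [], y, 1e-3); %primal-dual interior-point BP solver; At=[] since A is an explicit matrix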
The appendix of the user guide shows how (P1) is converted into a linear programming problem, but that derivation is not particularly easy to follow.
3. How is (P1) converted into a linear program?
Although l1-MAGIC provides an implementation of Basis Pursuit, it relies on the toolbox's l1eq_pd.m. Since Basis Pursuit is solved by linear programming, it should also be possible to solve it with MATLAB's built-in linprog function. So how exactly is (P1) converted into a standard linear program? Let us look at the explanation in reference [3]:
3 Basis Pursuit

We now discuss our approach to the problem of overcomplete representations. We assume that the dictionary is overcomplete, so that there are in general many representations s = Σ_γ α_γ φ_γ. The principle of Basis Pursuit is to find a representation of the signal whose coefficients have minimal ℓ1 norm. Formally, one solves the problem

\min \|\alpha\|_1 \quad \text{subject to} \quad \Phi\alpha = s. \qquad (3.1)

From one point of view, (3.1) is very similar to the method of Frames (2.3): we are simply replacing the ℓ2 norm in (2.3) with the ℓ1 norm. However, this apparently slight change has major consequences. The method of Frames leads to a quadratic optimization problem with linear equality constraints, and so involves essentially just the solution of a system of linear equations. In contrast, Basis Pursuit requires the solution of a convex, nonquadratic optimization problem, which involves considerably more effort and sophistication.

3.1 Linear Programming

To explain the last comment, and the name Basis Pursuit, we develop a connection with linear programming (LP). The linear program in so-called standard form [7, 16] is a constrained optimization problem defined in terms of a variable x ∈ R^m by

\min c^T x \quad \text{subject to} \quad Ax = b, \; x \ge 0, \qquad (3.2)

where c^T x is the objective function, Ax = b is a collection of equality constraints, and x ≥ 0 is a set of bounds. The main question is, which variables should be zero.

The Basis Pursuit problem (3.1) can be equivalently reformulated as a linear program in the standard form (3.2) by making the following translations:

m ⇔ 2p;  x ⇔ (u, v);  c ⇔ (1, 1);  A ⇔ (Φ, −Φ);  b ⇔ s.

Hence, the solution of (3.1) can be obtained by solving an equivalent linear program. (The equivalence of minimum ℓ1 optimizations with linear programming has been known since the 1950's; see [2].) The connection between Basis Pursuit and linear programming is useful in several ways.
The conversion described here in [3] is essentially the same as the one in Section 4.1 of [2], but it may still be hard for beginners, so let us walk through it below using the notation of [3].
First, the variable α in (3.1) is not constrained to be non-negative, so we write it as the difference of two non-negative variables, α = u − v; since u may be larger or smaller than v, α can be either positive or negative [4]. The constraint Φα = s then becomes Φ(u − v) = s, which can be written as [Φ, −Φ][u; v] = s, or more explicitly

\begin{bmatrix} \Phi & -\Phi \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = s.

Next, by the definition of the ℓ1 norm, the objective function can be rewritten as

\|\alpha\|_1 = \sum_i |\alpha_i| = \sum_i |u_i - v_i|.
The objective function contains absolute values; how do we eliminate them? For this we turn to reference [5]:
The key to understanding how the ℓ1 norm is linearized is to see why minimizing a single absolute value, min |x|, is equivalent to the linear program

\min \; y + z \quad \text{subject to} \quad y - z = x, \; y \ge 0, \; z \ge 0.

Suppose the optimal solution of this linear program is (y_0, z_0) with y_0 > 0 and z_0 > 0. Then we can always find a small positive number ε such that y_1 = y_0 − ε ≥ 0 and z_1 = z_0 − ε ≥ 0. The pair (y_1, z_1) satisfies all the constraints, e.g. y_1 − z_1 = (y_0 − ε) − (z_0 − ε) = y_0 − z_0 = x, yet it attains a smaller objective value, namely y_0 + z_0 − 2ε. This contradicts the optimality of (y_0, z_0). The general conclusion is therefore that the optimal solution must have either y = 0 or z = 0, so the optimal objective value is either x or −x, i.e. |x|.
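As a quick sanity check (an illustrative example of our own, not taken from [5]): let x = −3. The constraint y − z = −3 with y, z ≥ 0 forces z = y + 3, so the objective is y + z = 2y + 3 ≥ 3; the minimum is attained at y = 0, z = 3, giving the optimal value 3 = |−3|, exactly as the argument above predicts.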
Now generalize to ℓ1-norm minimization of a finite-dimensional vector, i.e.

\min \|x\|_1 = \min \sum_{i=1}^{n} |x_i|.

It is equivalent to the linear program

\min \; c\,(Y + Z) \quad \text{subject to} \quad Y - Z = X, \; Y \ge 0, \; Z \ge 0,

where c is a 1-by-n matrix whose entries are all 1.
Everything should now be clear. To summarize, the problem

\min \|\alpha\|_1 \quad \text{subject to} \quad \Phi\alpha = s, \quad \alpha \in \mathbb{R}^p,

can be converted into the linear program

\min \; c^T x \quad \text{subject to} \quad Ax = b, \; x \ge 0,

where c = [1, 1, …, 1]^T ∈ R^{2p×1}, x ∈ R^{2p}, A = [Φ, −Φ], and b = s. Once the optimal solution x_0 is found, the optimal α is recovered as α_0 = x_0(1:p) − x_0(p+1:2p).
4. Basis Pursuit MATLAB code based on linprog (BP_linprog.m)

function [ alpha ] = BP_linprog( s,Phi )
%BP_linprog(Basis Pursuit with linprog) Summary of this function goes here
%Version: 1.0 written by jbb0523 @2016-07-21
%Reference: Chen S S, Donoho D L, Saunders M A. Atomic decomposition by
%basis pursuit[J]. SIAM review, 2001, 43(1): 129-159. (Available at:
%/viewdoc/download?doi=10.1.1.37.4272&rep=rep1&type=pdf)
% Detailed explanation goes here
% s = Phi * alpha (alpha is a sparse vector)
% Given s & Phi, try to derive alpha
[s_rows,s_columns] = size(s);
if s_rows < s_columns
    s = s'; %s should be a column vector
end
p = size(Phi,2);
%according to section 3.1 of the reference
c = ones(2*p,1);
A = [Phi,-Phi];
b = s;
lb = zeros(2*p,1);
x0 = linprog(c,[],[],A,b,lb);%linprog(f,Aineq,bineq,Aeq,beq,lb): no inequality constraints, equality constraint A*x=b, lower bound x>=0
alpha = x0(1:p) - x0(p+1:2*p);
end
5. Single-trial Basis Pursuit reconstruction test code (CS_Reconstuction_Test.m)

The test code is the same as the OMP test code; only the reconstruction function has been changed.
%Compressed sensing reconstruction algorithm test
clear all;close all;clc;
M = 64;%Number of measurements
N = 256;%Length of the signal x
K = 10;%Sparsity level of the signal x
Index_K = randperm(N);
x = zeros(N,1);
x(Index_K(1:K)) = 5*randn(K,1);%x is K-sparse with randomly chosen support
Psi = eye(N);%x is sparse itself, so the sparsifying basis is the identity, x=Psi*theta
Phi = randn(M,N);%The measurement matrix is Gaussian
A = Phi * Psi;%Sensing matrix
y = Phi * x;%Measurement vector y
%% Reconstruct the signal x
tic
theta = BP_linprog(y,A);
x_r = Psi * theta;% x=Psi * theta
toc
%% Plot
figure;
plot(x_r,'k.-');%Plot the recovered signal
hold on;
plot(x,'r');%Plot the original signal x
hold off;
legend('Recovery','Original')
fprintf('\nRecovery residual:');
norm(x_r-x)%Recovery residual