A survey of sparse representation algorithms and applications
Sparse and Redundant Representation Modeling for Image Processing
Michael Elad
The Computer Science Department, The Technion – Israel Institute of Technology, Haifa 32000, Israel. * Joint work with Michal Aharon and Guillermo Sapiro.
Denoising By Energy Minimization
Many of the proposed denoising algorithms are related to the minimization of an energy function of the form
f(x) = ½ ‖x − y‖₂² + Pr(x)

where y is the given measurements and x is the unknown to be recovered; the first term expresses the relation to the measurements, and Pr(x) is the prior or regularization term.
This is in fact a Bayesian point of view, adopting Maximum A-Posteriori Probability (MAP) estimation. Clearly, the wisdom in such an approach lies in the choice of the prior – modeling the images of interest.
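To make this energy-minimization (MAP) view concrete, here is a minimal sketch — not taken from the slides — that minimizes f(x) = ½‖x − y‖₂² + λ‖∇x‖₂² by plain gradient descent, using a simple smoothness (Tikhonov) prior as a stand-in for Pr(x); the toy image, the weight λ and the step size are illustrative assumptions.

```python
import numpy as np

def denoise_map(y, lam=0.1, step=0.2, n_iter=200):
    """Gradient descent on f(x) = 0.5*||x - y||^2 + lam*||grad x||^2."""
    x = y.copy()
    for _ in range(n_iter):
        grad = x - y                                  # gradient of the data term
        # gradient of the smoothness prior is -2*lam * (discrete Laplacian of x)
        lap = (-4 * x
               + np.roll(x, 1, 0) + np.roll(x, -1, 0)
               + np.roll(x, 1, 1) + np.roll(x, -1, 1))
        grad += -2 * lam * lap
        x -= step * grad
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0   # toy image
    noisy = clean + 0.3 * rng.standard_normal(clean.shape)  # y = x + additive noise
    print("noisy MSE   :", np.mean((noisy - clean) ** 2))
    print("denoised MSE:", np.mean((denoise_map(noisy) - clean) ** 2))
```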
Remove Additive Noise?
Sparse Representations
Tara N. Sainath, SLTC Newsletter, November 2010

Sparse representations (SRs), including compressive sensing (CS), have gained popularity in the last few years as a technique used to reconstruct a signal from few training examples, a problem which arises in many machine learning applications. This reconstruction can be defined as adaptively finding a dictionary which best represents the signal on a per-sample basis. This dictionary could include random projections, as is typically done for signal reconstruction, or actual training samples from the data, which is explored in many machine learning applications. SR is a rapidly growing field with contributions in a variety of signal processing and machine learning conferences such as ICASSP, ICML and NIPS, and more recently in speech recognition. Recently, a special session on Sparse Representations took place at Interspeech 2010 in Makuhari, Japan, September 26-30, 2010. Work from this special session is summarized in more detail below.

FACE RECOGNITION VIA COMPRESSIVE SENSING

Yang et al. present a method for image-based robust face recognition using sparse representations [1]. Most state-of-the-art face recognition systems suffer from limited abilities to handle image nuisances such as illumination, facial disguise, and pose misalignment. Motivated by work in compressive sensing, the described method finds the sparsest linear combination of a query image using all prior training images, where the dominant sparse coefficients reveal the identity of the query image. In addition, extensions of applying sparse representations for face recognition also address a wide range of problems in the field of face recognition, such as dimensionality reduction, image corruption, and face alignment. The paper also provides useful guidelines to practitioners working in similar fields, such as speech recognition.

EXEMPLAR-BASED SPARSE REPRESENTATION FEATURES

In Sainath et al. [2], the authors explore the use of exemplar-based sparse representations (SRs) to map test features into the linear span of training examples. Specifically, given a test vector y and a set of exemplars from the training set, which are put into a dictionary H, y is represented as a linear combination of training examples by solving y = Hβ subject to a sparseness constraint on β. The feature Hβ can be thought of as mapping test sample y back into the linear span of training examples in H. The authors show that the frame classification accuracy using SRs is higher than using a Gaussian Mixture Model (GMM), showing that not only do SRs move test features closer to training, but they also move the features closer to the correct class. A Hidden Markov Model (HMM) is trained on these new SR features and evaluated in a speech recognition task. On the TIMIT corpus, applying the SR features on top of our best discriminatively trained system allows for a 0.7% absolute reduction in phonetic error rate (PER). Furthermore, on a large-vocabulary 50-hour broadcast news task, a reduction in word error rate (WER) of 0.3% absolute is obtained, demonstrating the benefit of these SR features for large vocabulary.

OBSERVATION UNCERTAINTY MEASURES FOR SPARSE IMPUTATION

Missing data techniques are used to estimate clean speech features from noisy environments by finding reliable information in the noisy speech signal. Decoding is then performed based on either the reliable information alone or using both reliable and unreliable information, where unreliable parts of the signal are reconstructed using missing data imputation prior to decoding. Sparse imputation (SI) is an exemplar-based reconstruction method which is based on representing segments of the noisy speech signal as linear combinations of as few clean speech example segments as possible. Decoding accuracy depends on several factors, including the uncertainty in the speech segment. Gemmeke et al. propose various uncertainty measures to characterize the expected accuracy of a sparse-imputation-based missing data method [3]. In experiments on noisy large-vocabulary speech data, using observation uncertainties derived from the proposed measures improved the speech recognition performance on features estimated with SI. Relative error reductions of up to 15% compared to the baseline system using SI without uncertainties were achieved with the best measures.

SPARSE AUTO-ASSOCIATIVE NEURAL NETWORKS: THEORY AND APPLICATION TO SPEECH RECOGNITION

Garimella et al. introduce a sparse auto-associative neural network (SAANN) in which the internal hidden layer output is forced to be sparse [4]. This is done by adding a sparse regularization term to the original reconstruction error cost function, and updating the parameters of the network to minimize the overall cost. The authors show the benefit of SAANN on the TIMIT phonetic recognition task. Specifically, a set of perceptual linear prediction (PLP) features are provided as input to the SAANN structure, and a set of sparse hidden layer outputs are produced and used as features. Experiments with the SAANN features on the TIMIT phoneme recognition system show a relative improvement in phoneme error rate of 5.1% over the baseline PLP features.

DATA SELECTION FOR LANGUAGE MODELING USING SPARSE REPRESENTATIONS

The ability to adapt language models to specific domains from large generic text corpora is of considerable interest to the language modeling community. One of the key challenges is to identify the text material relevant to a domain in the generic text collection. The text selection problem can be cast in a semi-supervised learning framework where the initial hypothesis from a speech recognition system is used to identify relevant training material. Sethy et al. [5] present a novel sparse representation formulation which selects a sparse set of relevant sentences from the training data which match the test set distribution. In this formulation, the training sentences are treated as the columns of the sparse representation matrix and the n-gram counts as the rows. The target vector is the n-gram probability distribution for the test data. A sparse solution to this problem formulation identifies a few columns which can best represent the target test vector, thus identifying the relevant set of sentences from the training data. Rescoring results with the language model built from the data selected using the proposed method yields modest gains on the English broadcast news RT-04 task, reducing the word error rate from 14.6% to 14.4%.

SPARSE REPRESENTATIONS FOR TEXT CATEGORIZATION

Given the superior performance of SRs compared to other classifiers for both image classification and phonetic classification, Sainath et al. extend the use of SRs to text classification [6], a method which has thus far not been explored for this domain. Specifically, Sainath et al. show how SRs can be used for text classification and how their performance varies with the vocabulary size of the documents. The research finds that the SR method offers promising results over the Naive Bayes (NB) classifier, a standard baseline classifier used for text categorization, thus introducing an alternative class of methods for text categorization.

CONCLUSIONS

This article presented an overview of sparse representation research in the areas of face recognition, speech recognition, language modeling and text classification. For more information, please see:

[1] A. Yang, Z. Zhou, Y. Ma and S. Shankar Sastry, "Towards a robust face recognition system using compressive sensing", in Proc. Interspeech, September 2010.
[2] T. N. Sainath, B. Ramabhadran, D. Nahamoo, D. Kanevsky and A. Sethy, "Exemplar-Based Sparse Representation Features for Speech Recognition", in Proc. Interspeech, September 2010.
[3] J. F. Gemmeke, U. Remes and K. J. Palomäki, "Observation Uncertainty Measures for Sparse Imputation", in Proc. Interspeech, September 2010.
[4] G.S.V.S. Sivaram, S. Ganapathy and H. Hermansky, "Sparse Auto-associative Neural Networks: Theory and Application to Speech Recognition", in Proc. Interspeech, September 2010.
[5] A. Sethy, T. N. Sainath, B. Ramabhadran and D. Kanevsky, "Data Selection for Language Modeling Using Sparse Representations", in Proc. Interspeech, September 2010.
[6] T. N. Sainath, S. Maskey, D. Kanevsky, B. Ramabhadran, D. Nahamoo and J. Hirschberg, "Sparse Representations for Text Categorization", in Proc. Interspeech, September 2010.
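As an illustration of the exemplar-based idea above (representing a test vector y over a dictionary H of training exemplars subject to a sparseness constraint), here is a minimal sketch. It uses an ℓ1 penalty via scikit-learn's Lasso as a stand-in for the sparseness constraint; the random data, dictionary size and regularization weight are illustrative assumptions, not values from the cited papers.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Dictionary H: columns are training exemplars (e.g. acoustic feature vectors).
n_dim, n_exemplars = 40, 200
H = rng.standard_normal((n_dim, n_exemplars))

# Test vector y built from a few exemplars plus a little noise.
beta_true = np.zeros(n_exemplars)
beta_true[[3, 57, 120]] = [1.0, -0.5, 0.8]
y = H @ beta_true + 0.01 * rng.standard_normal(n_dim)

# Solve y ~= H beta with a sparseness (l1) penalty on beta.
lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
lasso.fit(H, y)
beta = lasso.coef_

# H beta maps the test sample back into the linear span of the training exemplars.
sr_feature = H @ beta
print("non-zeros in beta  :", np.count_nonzero(beta))
print("reconstruction error:", np.linalg.norm(y - sr_feature))
```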
A large-database face classification algorithm using sparse representation in the frequency domain

Hu Yegang; Ren Xinyue; Li Peipei; Wang Huiyuan (School of Mathematics and Statistics, Fuyang Normal University, Fuyang, Anhui 236037, China)

Abstract: The recognition rate of face recognition is influenced by many factors, and although many effective high-accuracy algorithms exist, the recognition rate drops quickly as the number of face images in the database grows. A sparse representation classification algorithm in the frequency domain can address this problem effectively. First, the face data are transformed from the time domain to the frequency domain using the fast Fourier transform (FFT); then a sparse representation of each test sample is obtained by ℓ1-norm optimization, using all training samples as basis vectors; finally, classification is performed with a nearest-neighbor-subspace rule. Experimental results on the Extended Yale B face database show that the algorithm is effective.

Journal: Journal of Fuyang Normal University (Natural Science Edition), 2015, no. 2, pp. 83-86. Keywords: sparse representation; fast Fourier transform; face recognition. Classification: TP391.4.

1 Introduction. In recent years, face recognition has become one of the classic research problems in pattern recognition.
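The pipeline described in the abstract (FFT, ℓ1 sparse coding over all training samples, nearest-subspace decision) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the ℓ1 step uses scikit-learn's Lasso rather than the specific ℓ1 solver of the paper, and the data loading is left abstract.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_frequency_domain(train_faces, train_labels, test_face, alpha=0.01):
    """Sparse-representation classification on FFT-magnitude features.

    train_faces: (n_train, h, w) array; test_face: (h, w) array."""
    # Step 1: move every image to the frequency domain (magnitude of the 2-D FFT).
    feat = lambda img: np.abs(np.fft.fft2(img)).ravel()
    A = np.stack([feat(f) for f in train_faces], axis=1)   # columns = training samples
    y = feat(test_face)

    # Normalize columns so the l1 problem is well scaled.
    A = A / np.linalg.norm(A, axis=0, keepdims=True)
    y = y / np.linalg.norm(y)

    # Step 2: sparse representation of y over all training samples (l1 relaxation).
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    coder.fit(A, y)
    x = coder.coef_

    # Step 3: nearest-neighbor subspace rule -- keep only the coefficients of one
    # class at a time and pick the class with the smallest reconstruction residual.
    labels = np.asarray(train_labels)
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)
        residuals[c] = np.linalg.norm(y - A @ xc)
    return min(residuals, key=residuals.get)
```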
sparsevb: a manual for spike-and-slab variational Bayes sparse variable selection in high-dimensional linear and logistic regression models
Package‘sparsevb’October14,2022Type PackageTitle Spike-and-Slab Variational Bayes for Linear and LogisticRegressionVersion0.1.0Date2021-1-04Author Gabriel Clara[aut,cre],Botond Szabo[aut],Kolyan Ray[aut]Maintainer Gabriel Clara<*************************>Description Implements variational Bayesian algorithms to perform scalable variable selec-tion for sparse,high-dimensional linear and logistic regression models.Features in-clude a novel prioritized updating scheme,which uses a preliminary estimator of the varia-tional means during initialization to generate an updating order prioritizing large,more rele-vant,coefficients.Sparsity is induced via spike-and-slab priors with either Laplace or Gaus-sian slabs.By default,the heavier-tailed Laplace density is used.Formal derivations of the algo-rithms and asymptotic consistency results may be found in Kolyan Ray and Botond Sz-abo(2020)<doi:10.1080/01621459.2020.1847121>and Kolyan Ray,Botond Sz-abo,and Gabriel Clara(2020)<arXiv:2010.11665>.BugReports https:///gclara/varpack/-/issuesLicense GPL(>=3)Imports Rcpp(>=1.0.5),selectiveInference(>=1.2.5),glmnet(>=4.0-2),statsLinkingTo Rcpp,RcppArmadillo,RcppEnsmallenSystemRequirements C++11Encoding UTF-8RoxygenNote7.1.1NeedsCompilation yesRepository CRANDate/Publication2021-01-1509:20:02UTC12sparsevb-package R topics documented:sparsevb-package (2)svb.fit (3)Index6 sparsevb-package sparsevb:Spike-and-Slab Variational Bayes for Linear and LogisticRegressionDescriptionImplements variational Bayesian algorithms to perform scalable variable selection for sparse,high-dimensional linear and logistic regression models.Features include a novel prioritized updating scheme,which uses a preliminary estimator of the variational means during initialization to generate an updating order prioritizing large,more relevant,coefficients.Sparsity is induced via spike-and-slab priors with either Laplace or Gaussian slabs.By default,the heavier-tailed Laplace density is used.Formal derivations of the algorithms and asymptotic consistency results may be found in Kolyan Ray and Botond Szabo(2020)<doi:10.1080/01621459.2020.1847121>and Kolyan Ray, Botond Szabo,and Gabriel Clara(2020)<arXiv:2010.11665>.DetailsFor details as they pertain to using the package,consult the svb.fit function help page.Detailed descriptions and derivations of the variational algorithms with Laplace slabs may be found in the references.Author(s)Maintainer:Gabriel Clara<*************************>Authors:•Botond Szabo•Kolyan RayReferences•Ray K.and Szabo B.Variational Bayes for high-dimensional linear regression with sparse priors.(2020).Journal of the American Statistical Association.•Ray K.,Szabo B.,and Clara G.Spike and slab variational Bayes for high dimensional logistic regression.(2020).Advances in Neural Information Processing Systems33.See AlsoUseful links:•Report bugs at https:///gclara/varpack/-/issuessvb.fit Fit Approximate Posteriors to Sparse Linear and Logistic ModelsDescriptionMain function of the sparsevb putes mean-field posterior approximations for both linear and logistic regression models,including variable selection via sparsity-inducing spike and slab priors.Usagesvb.fit(X,Y,family=c("linear","logistic"),slab=c("laplace","gaussian"),mu,sigma=rep(1,ncol(X)),gamma,alpha,beta,prior_scale=1,update_order,intercept=FALSE,noise_sd,max_iter=1000,tol=1e-05)ArgumentsX A numeric design matrix,each row of which represents a vector of covari-ates/independent variables/features.Though not required,it is recommendedto center and scale the columns to 
have norm sqrt(nrow(X)).Y An nrow(X)-dimensional response vector,numeric if family="linear"andbinary if family="logistic".family A character string selecting the regression model,either"linear"or"logistic".slab A character string specifying the prior slab density,either"laplace"or"gaussian".mu An ncol(X)-dimensional numeric vector,serving as initial guess for the varia-tional means.If omitted,mu will be estimated via ridge regression to initializethe coordinate ascent algorithm.sigma A positive ncol(X)-dimensional numeric vector,serving as initial guess for thevariational standard deviations.gamma An ncol(X)-dimensional vector of probabilities,serving as initial guess forthe variational inclusion probabilities.If omitted,gamma will be estimated viaLASSO regression to initialize the coordinate ascent algorithm.alpha A positive numeric value,parametrizing the beta hyper-prior on the inclusion probabilities.If omitted,alpha will be chosen empirically via LASSO regres-sion.beta A positive numeric value,parametrizing the beta hyper-prior on the inclusion probabilities.If omitted,beta will be chosen empirically via LASSO regres-sion.prior_scale A numeric value,controlling the scale parameter of the prior slab ed as the scale parameterλwhen prior="laplace",or as the standard deviationσif prior="gaussian".update_order A permutation of1:ncol(X),giving the update order of the coordinate-ascent algorithm.If omitted,a data driven updating order is used,see Ray and Szabo(2020)in Journal of the American Statistical Association for details.intercept A Boolean variable,controlling if an intercept should be included.NB:This feature is still experimental in logistic regression.noise_sd A positive numerical value,serving as estimate for the residual noise standard deviation in linear regression.If missing it will be estimated,see estimateSigmafrom the selectiveInference package for more details.Has no effect whenfamily="logistic".max_iter A positive integer,controlling the maximum number of iterations for the varia-tional update loop.tol A small,positive numerical value,controlling the termination criterion for max-imum absolute differences between binary entropies of successive iterates. DetailsSupposeθis the p-dimensional true parameter.The spike-and-slab prior forθmay be represented by the hierarchical schemew∼Beta(α,β),z j|w∼i.i.d.Bernoulli(w),θj|z j∼ind.(1−z j)δ0+z j g.Here,δ0represents the Dirac measure at0.The slab g may be taken either as a Laplace(0,λ)or N(0,σ2)density.The former has centered densityfλ(x)=λ2e−λ|x|.Givenαandβ,the beta hyper-prior has densityb(x|α,β)=xα−1(1−x)β−1 1tα−1(1−t)β−1d t.A straightforward integration shows that the prior inclusion probability of a coefficient isαα+β. 
ValueThe approximate mean-field posterior,given as a named list containing numeric vectors"mu", "sigma","gamma",and a value"intercept".The latter is set to NA in case intercept=FALSE.In mathematical terms,the conditional distribution of eachθj is given byθj|µj,σj,γj∼ind.γj N(µj,σ2)+(1−γj)δ0.Examples###Simulate a linear regression problem of size n times p,with sparsity level s### n<-250p<-500s<-5###Generate toy data###X<-matrix(rnorm(n*p),n,p)#standard Gaussian design matrixtheta<-numeric(p)theta[sample.int(p,s)]<-runif(s,-3,3)#sample non-zero coefficients in random locations pos_TR<-as.numeric(theta!=0)#true positivesY<-X%*%theta+rnorm(n)#add standard Gaussian noise###Run the algorithm in linear mode with Laplace prior and prioritized initialization### test<-svb.fit(X,Y,family="linear")posterior_mean<-test$mu*test$gamma#approximate posterior meanpos<-as.numeric(test$gamma>0.5)#significant coefficients###Assess the quality of the posterior estimates###TPR<-sum(pos[which(pos_TR==1)])/sum(pos_TR)#True positive rateFDR<-sum(pos[which(pos_TR!=1)])/max(sum(pos),1)#False discovery rateL2<-sqrt(sum((posterior_mean-theta)^2))#L_2-errorMSPE<-sqrt(sum((X%*%posterior_mean-Y)^2)/n)#Mean squared prediction errorIndexsparsevb,3sparsevb(sparsevb-package),2sparsevb-package,2svb.fit,2,36。
Discriminatively Trained Sparse Code Gradients for Contour Detection
Discriminatively Trained Sparse Code Gradientsfor Contour DetectionXiaofeng Ren and Liefeng BoIntel Science and Technology Center for Pervasive Computing,Intel LabsSeattle,W A98195,USA{xiaofeng.ren,liefeng.bo}@AbstractFinding contours in natural images is a fundamental problem that serves as thebasis of many tasks such as image segmentation and object recognition.At thecore of contour detection technologies are a set of hand-designed gradient fea-tures,used by most approaches including the state-of-the-art Global Pb(gPb)operator.In this work,we show that contour detection accuracy can be signif-icantly improved by computing Sparse Code Gradients(SCG),which measurecontrast using patch representations automatically learned through sparse coding.We use K-SVD for dictionary learning and Orthogonal Matching Pursuit for com-puting sparse codes on oriented local neighborhoods,and apply multi-scale pool-ing and power transforms before classifying them with linear SVMs.By extract-ing rich representations from pixels and avoiding collapsing them prematurely,Sparse Code Gradients effectively learn how to measure local contrasts andfindcontours.We improve the F-measure metric on the BSDS500benchmark to0.74(up from0.71of gPb contours).Moreover,our learning approach can easily adaptto novel sensor data such as Kinect-style RGB-D cameras:Sparse Code Gradi-ents on depth maps and surface normals lead to promising contour detection usingdepth and depth+color,as verified on the NYU Depth Dataset.1IntroductionContour detection is a fundamental problem in vision.Accuratelyfinding both object boundaries and interior contours has far reaching implications for many vision tasks including segmentation,recog-nition and scene understanding.High-quality image segmentation has increasingly been relying on contour analysis,such as in the widely used system of Global Pb[2].Contours and segmentations have also seen extensive uses in shape matching and object recognition[8,9].Accuratelyfinding contours in natural images is a challenging problem and has been extensively studied.With the availability of datasets with human-marked groundtruth contours,a variety of approaches have been proposed and evaluated(see a summary in[2]),such as learning to clas-sify[17,20,16],contour grouping[23,31,12],multi-scale features[21,2],and hierarchical region analysis[2].Most of these approaches have one thing in common[17,23,31,21,12,2]:they are built on top of a set of gradient features[17]measuring local contrast of oriented discs,using chi-square distances of histograms of color and textons.Despite various efforts to use generic image features[5]or learn them[16],these hand-designed gradients are still widely used after a decade and support top-ranking algorithms on the Berkeley benchmarks[2].In this work,we demonstrate that contour detection can be vastly improved by replacing the hand-designed Pb gradients of[17]with rich representations that are automatically learned from data. 
We use sparse coding,in particularly Orthogonal Matching Pursuit[18]and K-SVD[1],to learn such representations on patches.Instead of a direct classification of patches[16],the sparse codes on the pixels are pooled over multi-scale half-discs for each orientation,in the spirit of the Pbimage patch: gray, abdepth patch (optional):depth, surface normal…local sparse coding multi-scale pooling oriented gradients power transformslinear SVM+ - …per-pixelsparse codes SVMSVMSVM … SVM RGB-(D) contoursFigure 1:We combine sparse coding and oriented gradients for contour analysis on color as well as depth images.Sparse coding automatically learns a rich representation of patches from data.With multi-scale pooling,oriented gradients efficiently capture local contrast and lead to much more accurate contour detection than those using hand-designed features including Global Pb (gPb)[2].gradients,before being classified with a linear SVM.The SVM outputs are then smoothed and non-max suppressed over orientations,as commonly done,to produce the final contours (see Fig.1).Our sparse code gradients (SCG)are much more effective in capturing local contour contrast than existing features.By only changing local features and keeping the smoothing and globalization parts fixed,we improve the F-measure on the BSDS500benchmark to 0.74(up from 0.71of gPb),a sub-stantial step toward human-level accuracy (see the precision-recall curves in Fig.4).Large improve-ments in accuracy are also observed on other datasets including MSRC2and PASCAL2008.More-over,our approach is built on unsupervised feature learning and can directly apply to novel sensor data such as RGB-D images from Kinect-style depth ing the NYU Depth dataset [27],we verify that our SCG approach combines the strengths of color and depth contour detection and outperforms an adaptation of gPb to RGB-D by a large margin.2Related WorkContour detection has a long history in computer vision as a fundamental building block.Modern approaches to contour detection are evaluated on datasets of natural images against human-marked groundtruth.The Pb work of Martin et.al.[17]combined a set of gradient features,using bright-ness,color and textons,to outperform the Canny edge detector on the Berkeley Benchmark (BSDS).Multi-scale versions of Pb were developed and found beneficial [21,2].Building on top of the Pb gradients,many approaches studied the globalization aspects,i.e.moving beyond local classifica-tion and enforcing consistency and continuity of contours.Ren et.al.developed CRF models on superpixels to learn junction types [23].Zhu ed circular embedding to enforce orderings of edgels [31].The gPb work of Arbelaez puted gradients on eigenvectors of the affinity graph and combined them with local cues [2].In addition to Pb gradients,Dollar et.al.[5]learned boosted trees on generic features such as gradients and Haar wavelets,Kokkinos used SIFT features on edgels [12],and Prasad et.al.[20]used raw pixels in class-specific settings.One closely related work was the discriminative sparse models of Mairal et al [16],which used K-SVD to represent multi-scale patches and had moderate success on the BSDS.A major difference of our work is the use of oriented gradients:comparing to directly classifying a patch,measuring contrast between oriented half-discs is a much easier problem and can be effectively learned.Sparse coding represents a signal by reconstructing it using a small set of basis functions.It has seen wide uses in vision,for example for faces [28]and recognition 
[29].Similar to deep network approaches [11,14],recent works tried to avoid feature engineering and employed sparse coding of image patches to learn features from “scratch”,for texture analysis [15]and object recognition [30,3].In particular,Orthogonal Matching Pursuit [18]is a greedy algorithm that incrementally finds sparse codes,and K-SVD is also efficient and popular for dictionary learning.Closely related to our work but on the different problem of recognition,Bo ed matching pursuit and K-SVD to learn features in a coding hierarchy [3]and are extending their approach to RGB-D data [4].Thanks to the mass production of Kinect,active RGB-D cameras became affordable and were quickly adopted in vision research and applications.The Kinect pose estimation of Shotton et. ed random forests to learn from a huge amount of data[25].Henry ed RGB-D cam-eras to scan large environments into3D models[10].RGB-D data were also studied in the context of object recognition[13]and scene labeling[27,22].In-depth studies of contour and segmentation problems for depth data are much in need given the fast growing interests in RGB-D perception.3Contour Detection using Sparse Code GradientsWe start by examining the processing pipeline of Global Pb(gPb)[2],a highly influential and widely used system for contour detection.The gPb contour detection has two stages:local contrast estimation at multiple scales,and globalization of the local cues using spectral grouping.The core of the approach lies within its use of local cues in oriented gradients.Originally developed in [17],this set of features use relatively simple pixel representations(histograms of brightness,color and textons)and similarity functions(chi-square distance,manually chosen),comparing to recent advances in using rich representations for high-level recognition(e.g.[11,29,30,3]).We set out to show that both the pixel representation and the aggregation of pixel information in local neighborhoods can be much improved and,to a large extent,learned from and adapted to input data. 
For pixel representation, in Section 3.1 we show how to use Orthogonal Matching Pursuit [18] and K-SVD [1], efficient sparse coding and dictionary learning algorithms that readily apply to low-level vision, to extract sparse codes at every pixel. This sparse coding approach can be viewed as similar in spirit to the use of filterbanks but avoids manual choices and thus directly applies to the RGB-D data from Kinect. We show learned dictionaries for a number of channels that exhibit different characteristics: grayscale/luminance, chromaticity (ab), depth, and surface normal. In Section 3.2 we show how the pixel-level sparse codes can be integrated through multi-scale pooling into a rich representation of oriented local neighborhoods. By computing oriented gradients on this high dimensional representation and using a double power transform to code the features for linear classification, we show a linear SVM can be efficiently and effectively trained for each orientation to classify contour vs non-contour, yielding local contrast estimates that are much more accurate than the hand-designed features in gPb.

3.1 Local Sparse Representation of RGB-(D) Patches

K-SVD and Orthogonal Matching Pursuit. K-SVD [1] is a popular dictionary learning algorithm that generalizes K-Means and learns dictionaries of codewords from unsupervised data. Given a set of image patches Y = [y₁, ···, yₙ], K-SVD jointly finds a dictionary D = [d₁, ···, d_m] and an associated sparse code matrix X = [x₁, ···, xₙ] by minimizing the reconstruction error

min_{D,X} ‖Y − DX‖²_F   s.t.  ∀i, ‖xᵢ‖₀ ≤ K;  ∀j, ‖dⱼ‖₂ = 1    (1)

where ‖·‖_F denotes the Frobenius norm, the xᵢ are the columns of X, the zero-norm ‖·‖₀ counts the non-zero entries in the sparse code xᵢ, and K is a predefined sparsity level (number of non-zero entries). This optimization can be solved in an alternating manner. Given the dictionary D, optimizing the sparse code matrix X can be decoupled into sub-problems, each solved with Orthogonal Matching Pursuit (OMP) [18], a greedy algorithm for finding sparse codes. Given the codes X, the dictionary D and its associated sparse coefficients are updated sequentially by singular value decomposition. For our purpose of representing local patches, the dictionary D has a small size (we use 75 for 5x5 patches) and does not require a lot of sample patches, and it can be learned in a matter of minutes.
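As a rough illustration of this K-SVD/OMP patch-coding step (not the authors' implementation), the sketch below learns a 75-atom dictionary on 5x5 grayscale patches and computes sparse codes with OMP using scikit-learn; K-SVD itself is approximated here by scikit-learn's online dictionary learning, and the image source, sparsity level and sample count are placeholder assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
image = rng.random((128, 128))            # placeholder for a grayscale input image

# Collect 5x5 patches and remove their means, as done for each data channel.
patches = extract_patches_2d(image, (5, 5), max_patches=20000, random_state=0)
Y = patches.reshape(len(patches), -1)
Y -= Y.mean(axis=1, keepdims=True)

# Learn a 75-atom dictionary and encode every patch with OMP at sparsity level K=3.
learner = MiniBatchDictionaryLearning(
    n_components=75,
    transform_algorithm="omp",
    transform_n_nonzero_coefs=3,
    random_state=0,
)
codes = learner.fit(Y).transform(Y)        # one sparse code per patch
print("dictionary shape      :", learner.components_.shape)   # (75, 25)
print("avg non-zeros per code:", np.count_nonzero(codes, axis=1).mean())
```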
Once the dictionary D is learned,we again use the Orthogonal Matching Pursuit(OMP)algorithm to compute sparse codes at every pixel.This can be efficiently done with convolution and a batch version of the OMP algorithm[24].For a typical BSDS image of resolution321x481,the sparse code extraction is efficient and takes1∼2seconds.Sparse Representation of RGB-D Data.One advantage of unsupervised dictionary learning is that it readily applies to novel sensor data,such as the color and depth frames from a Kinect-style RGB-D camera.We learn K-SVD dictionaries up to four channels of color and depth:grayscale for luminance,chromaticity ab for color in the Lab space,depth(distance to camera)and surface normal(3-dim).The learned dictionaries are visualized in Fig.2.These dictionaries are interesting(a)Grayscale (b)Chromaticity (ab)(c)Depth (d)Surface normal Figure 2:K-SVD dictionaries learned for four different channels:grayscale and chromaticity (in ab )for an RGB image (a,b),and depth and surface normal for a depth image (c,d).We use a fixed dictionary size of 75on 5x 5patches.The ab channel is visualized using a constant luminance of 50.The 3-dimensional surface normal (xyz)is visualized in RGB (i.e.blue for frontal-parallel surfaces).to look at and qualitatively distinctive:for example,the surface normal codewords tend to be more smooth due to flat surfaces,the depth codewords are also more smooth but with speckles,and the chromaticity codewords respect the opponent color pairs.The channels are coded separately.3.2Coding Multi-Scale Neighborhoods for Measuring ContrastMulti-Scale Pooling over Oriented Half-Discs.Over decades of research on contour detection and related topics,a number of fundamental observations have been made,repeatedly:(1)contrast is the key to differentiate contour vs non-contour;(2)orientation is important for respecting contour continuity;and (3)multi-scale is useful.We do not wish to throw out these principles.Instead,we seek to adopt these principles for our case of high dimensional representations with sparse codes.Each pixel is presented with sparse codes extracted from a small patch (5-by-5)around it.To aggre-gate pixel information,we use oriented half-discs as used in gPb (see an illustration in Fig.1).Each orientation is processed separately.For each orientation,at each pixel p and scale s ,we define two half-discs (rectangles)N a and N b of size s -by-(2s +1),on both sides of p ,rotated to that orienta-tion.For each half-disc N ,we use average pooling on non-zero entries (i.e.a hybrid of average and max pooling)to generate its representationF (N )= i ∈N |x i 1| i ∈N I |x i 1|>0,···, i ∈N |x im | i ∈NI |x im |>0 (2)where x ij is the j -th entry of the sparse code x i ,and I is the indicator function whether x ij is non-zero.We rotate the image (after sparse coding)and use integral images for fast computations (on both |x ij |and |x ij |>0,whose costs are independent of the size of N .For two oriented half-dics N a and N b at a scale s ,we compute a difference (gradient)vector DD (N a s ,N b s )= F (N a s )−F (N b s ) (3)where |·|is an element-wise absolute value operation.We divide D (N a s ,N b s )by their norms F (N a s ) + F (N b s ) + ,where is a positive number.Since the magnitude of sparse codes variesover a wide range due to local variations in illumination as well as occlusion,this step makes the appearance features robust to such variations and increases their discriminative power,as commonly done in both contour detection and object recognition.This value is 
not hard to set,and we find a value of =0.5is better than,for instance, =0.At this stage,one could train a classifier on D for each scale to convert it to a scalar value of contrast,which would resemble the chi-square distance function in gPb.Instead,we find that it is much better to avoid doing so separately at each scale,but combining multi-scale features in a joint representation,so as to allow interactions both between codewords and between scales.That is,our final representation of the contrast at a pixel p is the concatenation of sparse codes pooled at all thescales s ∈{1,···,S }(we use S =4):D p = D (N a 1,N b 1),···,D (N a S ,N b S );F (N a 1∪N b 1),···,F (N a S ∪N b S ) (4)In addition to difference D ,we also include a union term F (N a s ∪N b s ),which captures the appear-ance of the whole disc (union of the two half discs)and is normalized by F (N a s ) + F (N b s ) + .Double Power Transform and Linear Classifiers.The concatenated feature D p (non-negative)provides multi-scale contrast information for classifying whether p is a contour location for a partic-ular orientation.As D p is high dimensional (1200and above in our experiments)and we need to do it at every pixel and every orientation,we prefer using linear SVMs for both efficient testing as well as training.Directly learning a linear function on D p ,however,does not work very well.Instead,we apply a double power transformation to make the features more suitable for linear SVMs D p = D α1p ,D α2p (5)where 0<α1<α2<1.Empirically,we find that the double power transform works much better than either no transform or a single power transform α,as sometimes done in other classification contexts.Perronnin et.al.[19]provided an intuition why a power transform helps classification,which “re-normalizes”the distribution of the features into a more Gaussian form.One plausible intuition for a double power transform is that the optimal exponent αmay be different across feature dimensions.By putting two power transforms of D p together,we allow the classifier to pick its linear combination,different for each dimension,during the stage of supervised training.From Local Contrast to Global Contours.We intentionally only change the local contrast es-timation in gPb and keep the other steps fixed.These steps include:(1)the Savitzky-Goley filter to smooth responses and find peak locations;(2)non-max suppression over orientations;and (3)optionally,we apply the globalization step in gPb that computes a spectral gradient from the local gradients and then linearly combines the spectral gradient with the local ones.A sigmoid transform step is needed to convert the SVM outputs on D p before computing spectral gradients.4ExperimentsWe use the evaluation framework of,and extensively compare to,the publicly available Global Pb (gPb)system [2],widely used as the state of the art for contour detection 1.All the results reported on gPb are from running the gPb contour detection and evaluation codes (with default parameters),and accuracies are verified against the published results in [2].The gPb evaluation includes a number of criteria,including precision-recall (P/R)curves from contour matching (Fig.4),F-measures computed from P/R (Table 1,2,3)with a fixed contour threshold (ODS)or per-image thresholds (OIS),as well as average precisions (AP)from the P/R curves.Benchmark Datasets.The main dataset we use is the BSDS500benchmark [2],an extension of the original BSDS300benchmark and commonly used for contour evaluation.It includes 500natural images of 
roughly resolution 321x 481,including 200for training,100for validation,and 200for testing.We conduct both color and grayscale experiments (where we convert the BSDS500images to grayscale and retain the groundtruth).In addition,we also use the MSRC2and PASCAL2008segmentation datasets [26,6],as done in the gPb work [2].The MSRC2dataset has 591images of resolution 200x 300;we randomly choose half for training and half for testing.The PASCAL2008dataset includes 1023images in its training and validation sets,roughly of resolution 350x 500.We randomly choose half for training and half for testing.For RGB-D contour detection,we use the NYU Depth dataset (v2)[27],which includes 1449pairs of color and depth frames of resolution 480x 640,with groundtruth semantic regions.We choose 60%images for training and 40%for testing,as in its scene labeling setup.The Kinect images are of lower quality than BSDS,and we resize the frames to 240x 320in our experiments.Training Sparse Code Gradients.Given sparse codes from K-SVD and Orthogonal Matching Pur-suit,we train the Sparse Code Gradients classifiers,one linear SVM per orientation,from sampled locations.For positive data,we sample groundtruth contour locations and estimate the orientations at these locations using groundtruth.For negative data,locations and orientations are random.We subtract the mean from the patches in each data channel.For BSDS500,we typically have 1.5to 21In this work we focus on contour detection and do not address how to derive segmentations from contours.pooling disc size (pixel)a v e r a g e p r e c i s i o na v e r a g e p r e c i s i o nsparsity level a v e r a g e p r e c i s i o n (a)(b)(c)Figure 3:Analysis of our sparse code gradients,using average precision of classification on sampled boundaries.(a)The effect of single-scale vs multi-scale pooling (accumulated from the smallest).(b)Accuracy increasing with dictionary size,for four orientation channels.(c)The effect of the sparsity level K,which exhibits different behavior for grayscale and chromaticity.BSDS500ODS OIS AP l o c a l gPb (gray).67.69.68SCG (gray).69.71.71gPb (color).70.72.71SCG (color).72.74.75g l o b a l gPb (gray).69.71.67SCG (gray).71.73.74gPb (color).71.74.72SCG (color).74.76.77Table 1:F-measure evaluation on the BSDS500benchmark [2],comparing to gPb on grayscaleand color images,both for local contour detec-tion as well as for global detection (-bined with the spectral gradient analysis in [2]).Recall P r e c i s i o n Figure 4:Precision-recall curves of SCG vs gPb on BSDS500,for grayscale and color images.We make a substantial step beyondthe current state of the art toward reachinghuman-level accuracy (green dot).million data points.We use 4spatial scales,at half-disc sizes 2,4,7,25.For a dictionary size of 75and 4scales,the feature length for one data channel is 1200.For full RGB-D data,the dimension is 4800.For BSDS500,we train only using the 200training images.We modify liblinear [7]to take dense matrices (features are dense after pooling)and single-precision floats.Looking under the Hood.We empirically analyze a number of settings in our Sparse Code Gradi-ents.In particular,we want to understand how the choices in the local sparse coding affect contour classification.Fig.3shows the effects of multi-scale pooling,dictionary size,and sparsity level (K).The numbers reported are intermediate results,namely the mean of average precision of four oriented gradient classifier (0,45,90,135degrees)on sampled locations (grayscale unless otherwise noted,on 
validation).As a reference,the average precision of gPb on this task is 0.878.For multi-scale pooling,the single best scale for the half-disc filter is about 4x 8,consistent with the settings in gPb.For accumulated scales (using all the scales from the smallest up to the current level),the accuracy continues to increase and does not seem to be saturated,suggesting the use of larger scales.The dictionary size has a minor impact,and there is a small (yet observable)benefit to use dictionaries larger than 75,particularly for diagonal orientations (45-and 135-deg).The sparsity level K is a more intriguing issue.In Fig.3(c),we see that for grayscale only,K =1(normalized nearest neighbor)does quite well;on the other hand,color needs a larger K ,possibly because ab is a nonlinear space.When combining grayscale and color,it seems that we want K to be at least 3.It also varies with orientation:horizontal and vertical edges require a smaller K than diagonal edges.(If using K =1,our final F-measure on BSDS500is 0.730.)We also empirically evaluate the double power transform vs single power transform vs no transform.With no transform,the average precision is 0.865.With a single power transform,the best choice of the exponent is around 0.4,with average precision 0.884.A double power transform (with exponentsMSRC2ODS OIS APgPb.37.39.22SCG.43.43.33PASCAL2008ODS OIS APgPb.34.38.20SCG.37.41.27Table2:F-measure evaluation comparing our SCG approach to gPb on two addi-tional image datasets with contour groundtruth: MSRC2[26]and PASCAL2008[6].RGB-D(NYU v2)ODS OIS AP gPb(color).51.52.37 SCG(color).55.57.46gPb(depth).44.46.28SCG(depth).53.54.45gPb(RGB-D).53.54.40SCG(RGB-D).62.63.54Table3:F-measure evaluation on RGB-D con-tour detection using the NYU dataset(v2)[27].We compare to gPb on using color image only,depth only,as well as color+depth.Figure5:Examples from the BSDS500dataset[2].(Top)Image;(Middle)gPb output;(Bottom) SCG output(this work).Our SCG operator learns to preservefine details(e.g.windmills,faces,fish fins)while at the same time achieving higher precision on large-scale contours(e.g.back of zebras). (Contours are shown in double width for the sake of visualization.)0.25and0.75,which can be computed through sqrt)improves the average precision to0.900,which translates to a large improvement in contour detection accuracy.Image Benchmarking Results.In Table1and Fig.4we show the precision-recall of our Sparse Code Gradients vs gPb on the BSDS500benchmark.We conduct four sets of experiments,using color or grayscale images,with or without the globalization component(for which we use exactly the same setup as in gPb).Using Sparse Code Gradients leads to a significant improvement in accuracy in all four cases.The local version of our SCG operator,i.e.only using local contrast,is already better(F=0.72)than gPb with globalization(F=0.71).The full version,local SCG plus spectral gradient(computed from local SCG),reaches an F-measure of0.739,a large step forward from gPb,as seen in the precision-recall curves in Fig.4.On BSDS300,our F-measure is0.715. 
We observe that SCG seems to pick upfine-scale details much better than gPb,hence the much higher recall rate,while maintaining higher precision over the entire range.This can be seen in the examples shown in Fig.5.While our scale range is similar to that of gPb,the multi-scale pooling scheme allows theflexibility of learning the balance of scales separately for each code word,which may help detecting the details.The supplemental material contains more comparison examples.In Table2we show the benchmarking results for two additional datasets,MSRC2and PAS-CAL2008.Again we observe large improvements in accuracy,in spite of the somewhat different natures of the scenes in these datasets.The improvement on MSRC2is much larger,partly because the images are smaller,hence the contours are smaller in scale and may be over-smoothed in gPb. As for computational cost,using integral images,local SCG takes∼100seconds to compute on a single-thread Intel Core i5-2500CPU on a BSDS image.It is slower than but comparable to the highly optimized multi-thread C++implementation of gPb(∼60seconds).Figure6:Examples of RGB-D contour detection on the NYU dataset(v2)[27].Thefive panels are:input image,input depth,image-only contours,depth-only contours,and color+depth contours. Color is good picking up details such as photos on the wall,and depth is useful where color is uniform(e.g.corner of a room,row1)or illumination is poor(e.g.chair,row2).RGB-D Contour Detection.We use the second version of the NYU Depth Dataset[27],which has higher quality groundtruth than thefirst version.A medianfiltering is applied to remove double contours(boundaries from two adjacent regions)within3pixels.For RGB-D baseline,we use a simple adaptation of gPb:the depth values are in meters and used directly as a grayscale image in gPb gradient computation.We use a linear combination to put(soft)color and depth gradients together in gPb before non-max suppression,with the weight set from validation.Table3lists the precision-recall evaluations of SCG vs gPb for RGB-D contour detection.All the SCG settings(such as scales and dictionary sizes)are kept the same as for BSDS.SCG again outperforms gPb in all the cases.In particular,we are much better for depth-only contours,for which gPb is not designed.Our approach learns the low-level representations of depth data fully automatically and does not require any manual tweaking.We also achieve a much larger boost by combining color and depth,demonstrating that color and depth channels contain complementary information and are both critical for RGB-D contour detection.Qualitatively,it is easy to see that RGB-D combines the strengths of color and depth and is a promising direction for contour and segmentation tasks and indoor scene analysis in general[22].Fig.6shows a few examples of RGB-D contours from our SCG operator.There are plenty of such cases where color alone or depth alone would fail to extract contours for meaningful parts of the scenes,and color+depth would succeed. 5DiscussionsIn this work we successfully showed how to learn and code local representations to extract contours in natural images.Our approach combined the proven concept of oriented gradients with powerful representations that are automatically learned through sparse coding.Sparse Code Gradients(SCG) performed significantly better than hand-designed features that were in use for a decade,and pushed contour detection much closer to human-level accuracy as illustrated on the BSDS500benchmark. 
Comparing to hand-designed features (e.g. Global Pb [2]), we maintain the high-dimensional representation from pooling oriented neighborhoods and do not collapse them prematurely (such as computing chi-square distance at each scale). This passes a richer set of information into learning contour classification, where a double power transform effectively codes the features for linear classification. Comparing to previous learning approaches (e.g. discriminative dictionaries in [16]), our uses of multi-scale pooling and oriented gradients lead to much higher classification accuracies. Our work opens up future possibilities for learning contour detection and segmentation. As we illustrated, there is a lot of information locally that is waiting to be extracted, and a learning approach such as sparse coding provides a principled way to do so, where rich representations can be automatically constructed and adapted. This is particularly important for novel sensor data such as RGB-D, for which we have less understanding but increasingly more need.
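To make the feature-coding steps concrete, here is a small sketch (with assumed shapes and random stand-in codes, not the authors' code) of the average pooling over non-zero code entries, the normalized half-disc difference and union terms, and the double power transform applied before the linear classifier; the exponents 0.25 and 0.75 follow the values mentioned in the experiments, everything else is illustrative.

```python
import numpy as np

EPS = 0.5  # small positive constant used for normalization

def pool(codes):
    """Average-pool absolute sparse codes over their non-zero entries.

    codes: (n_pixels_in_region, m) sparse codes of the pixels in one half-disc."""
    absx = np.abs(codes)
    nnz = (absx > 0).sum(axis=0)
    return absx.sum(axis=0) / np.maximum(nnz, 1)

def half_disc_gradient(codes_a, codes_b):
    """Normalized difference and union terms for two oriented half-discs at one scale."""
    fa, fb = pool(codes_a), pool(codes_b)
    norm = np.linalg.norm(fa) + np.linalg.norm(fb) + EPS
    diff = np.abs(fa - fb) / norm
    union = pool(np.vstack([codes_a, codes_b])) / norm
    return np.concatenate([diff, union])

def double_power(features, a1=0.25, a2=0.75):
    """Double power transform D_p -> [D_p**a1, D_p**a2] before the linear SVM."""
    return np.concatenate([features ** a1, features ** a2])

# Example: random stand-in sparse codes for the two half-discs at one scale.
rng = np.random.default_rng(0)
codes_a = rng.random((60, 75)) * (rng.random((60, 75)) < 0.05)
codes_b = rng.random((60, 75)) * (rng.random((60, 75)) < 0.05)
x = double_power(half_disc_gradient(codes_a, codes_b))
print(x.shape)  # (300,) = 2 * (75 difference + 75 union entries)
```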
A sparse-reconstruction DOA estimation algorithm based on compressive sensing

Bao Xiaolei; Qu Xinggen; Wang Zhuoying (Department of Communication and Information Engineering, Shanghai Technical Institute of Electronics and Information, Shanghai 201411; College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001)

Abstract: Based on the sparsity of the spatial distribution of targets and the ideas of compressive sensing (CS) theory, a multiple-measurement gradient projection for sparse reconstruction algorithm based on singular value decomposition (SVD-MGPSR) is proposed, which turns multi-target DOA estimation into a sparse signal reconstruction problem. First, the signals are given a joint sparse representation over an overcomplete atom dictionary built from the array manifold; then the SVD of the compressively sampled information matrix is computed, which markedly reduces the amount of computation; finally, the sparse signal is reconstructed with the MGPSR algorithm, yielding the DOA estimates. Compared with existing algorithms, the proposed algorithm not only attains a smaller direction-finding mean square error at low SNR and with few snapshots, but can also correctly estimate coherent signals, offering higher direction-finding precision and angular resolution. Simulation experiments verify the effectiveness of the algorithm.

Journal: Journal of Wuhan University of Technology (Information and Management Engineering Edition), 2015, no. 6, pp. 827-831, 864. Keywords: compressive sensing; DOA estimation; singular value decomposition; gradient projection.

Introduction. Direction-of-arrival (DOA) estimation of target signals has long been an important research direction in array signal processing, with wide applications in communications, radar, and medical imaging [1].
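The core formulation (an overcomplete steering dictionary over a grid of candidate angles, followed by sparse recovery) can be illustrated with the following sketch; it uses a plain OMP solver from scikit-learn rather than the SVD-MGPSR algorithm of the paper, and the array geometry, angle grid, source amplitudes and noise level are assumptions made for illustration only.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)

M = 16                                              # sensors in a half-wavelength ULA
grid = np.deg2rad(np.arange(-90, 90.5, 0.5))        # candidate DOA grid
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(grid)))   # steering dictionary

# One snapshot with two on-grid sources (real amplitudes) plus a little noise.
true_doas = np.deg2rad([-20.0, 15.0])
A_true = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(true_doas)))
y = A_true @ np.array([1.0, 0.8]) + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

# OMP works on real data, so stack real and imaginary parts.
A_real = np.vstack([A.real, A.imag])
y_real = np.concatenate([y.real, y.imag])

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2, fit_intercept=False)  # sources assumed known
omp.fit(A_real, y_real)
support = np.flatnonzero(omp.coef_)
print("estimated DOAs (deg):", np.rad2deg(grid[support]))
```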
CVPR 2013 Summary

The decisions came out a short while ago; first of all, congratulations to a former labmate of mine, now graduated and working, who had a paper accepted.

The complete paper list has already been published on the CVPR homepage (), and today I am organizing a few papers of interest from it. Although download links for most of the papers are not yet out, one can still follow the latest developments. I will add each download link once it becomes available. Since the download links are not out yet, I can only guess at the content of each paper from its title and authors. There will inevitably be some inaccuracies, which I will correct after reading the papers.

Saliency

Saliency Aggregation: A Data-driven Approach. Long Mai, Yuzhen Niu, Feng Liu. No related material has turned up yet; it is probably adaptive fusion of multiple cues for saliency detection.

PISA: Pixelwise Image Saliency by Aggregating Complementary Appearance Contrast Measures with Spatial Priors. Keyang Shi, Keze Wang, Jiangbo Lu, Liang Lin. Neither of the two cues here looks new; the integration framework is presumably the strong point. Moreover, since it is pixel-level, it can probably reach segmentation- or matting-quality output.

Looking Beyond the Image: Unsupervised Learning for Object Saliency and Detection. Parthipan Siva, Chris Russell, Tao Xiang. Learning-based saliency detection.

Learning video saliency from human gaze using candidate selection. Dan Goldman, Eli Shechtman, Lihi Zelnik-Manor. This one does video saliency, presumably selecting the salient video objects.

Hierarchical Saliency Detection. Qiong Yan, Li Xu, Jianping Shi, Jiaya Jia. Jiaya Jia's students have also started working on saliency; a multi-scale method.

Saliency Detection via Graph-Based Manifold Ranking. Chuan Yang, Lihe Zhang, Huchuan Lu, Ming-Hsuan Yang, Xiang Ruan. This should extend the classic graph-based saliency, presumably using saliency propagation techniques.

Salient object detection: a discriminative regional feature integration approach. Jingdong Wang, Zejian Yuan, Nanning Zheng. A saliency detection method based on adaptive fusion of multiple features.

Submodular Salient Region Detection. Larry Davis. Another paper from a big name's group; the formulation is also quite novel, using submodularity.
Sparse Representation
Notice: Our Task is Impossible
min_α ‖α‖₀   s.t.   ‖Dα − x‖₂ ≤ ε

Here is a recipe for solving this problem: set L = 1; gather all the supports {Sᵢ}ᵢ of cardinality L — there are (K choose L) such supports; solve the least-squares problem restricted to each support and check whether its error is within ε; if no support of size L succeeds, increase L and repeat. The number of supports to test grows combinatorially, which is why this exact task is impossible in practice.
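As a literal (and deliberately infeasible) rendering of this recipe, the toy sketch below enumerates supports of growing cardinality and solves a least-squares problem on each; the point is that the number of supports, C(K, L), explodes combinatorially, so this exact approach is hopeless for realistic dictionary sizes. The sizes and tolerance here are illustrative.

```python
import itertools
import math
import numpy as np

def sparsest_by_enumeration(D, x, eps):
    """Exact (brute-force) solver of min ||alpha||_0 s.t. ||D alpha - x||_2 <= eps."""
    n, K = D.shape
    for L in range(1, K + 1):                       # L = 1, 2, ...
        print(f"L={L}: testing C({K},{L}) = {math.comb(K, L)} supports")
        for support in itertools.combinations(range(K), L):
            cols = list(support)
            coef, *_ = np.linalg.lstsq(D[:, cols], x, rcond=None)
            if np.linalg.norm(D[:, cols] @ coef - x) <= eps:
                alpha = np.zeros(K)
                alpha[cols] = coef
                return alpha                         # sparsest feasible solution found
    return None

rng = np.random.default_rng(0)
D = rng.standard_normal((10, 15)); D /= np.linalg.norm(D, axis=0)
true = np.zeros(15); true[[2, 9]] = [1.0, -1.0]
x = D @ true
print("support found:", sparsest_by_enumeration(D, x, eps=1e-8).nonzero()[0])
```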
Uniqueness Rule

Suppose this problem has been approximated somehow and we obtain a solution

α̂ = Argmin_α ‖α‖₀   s.t.   ‖x − Dα‖₂ ≤ ε.

Clearly, it might happen that eventually ‖α̂‖₀ < ‖α₀‖₀ — the recovered α̂ need not coincide with the α₀ that generated x.
Let's search for the sparsest solution of x = Dα, i.e. of min_α ‖Dα − x‖₂². As a measure of sparsity, consider the scalar weight f(α) = |α|^p (plotted on the slide for several values of p between 1 and 0): as p → 0,

‖α‖ₚᵖ = Σ_{j=1}^{k} |αⱼ|ᵖ  →  ‖α‖₀,

i.e. we get a count of the non-zeros in the vector.
Next step: given the previously found atoms, find the next one that best fits the residual. The algorithm stops when the error ‖Dα − x‖₂ falls below the destination threshold.
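The greedy procedure described above (pick the atom best matched to the current residual, re-fit, repeat until the error is small enough) is essentially Orthogonal Matching Pursuit. A minimal NumPy sketch, not tied to any particular library, might look as follows; the dictionary size, sparsity cap and tolerance are illustrative.

```python
import numpy as np

def omp(D, x, tol=1e-6, max_atoms=None):
    """Greedy sparse coding: x ~= D @ alpha with few non-zeros in alpha.

    D: (n, K) dictionary with unit-norm columns; x: (n,) signal."""
    n, K = D.shape
    max_atoms = max_atoms or K
    residual = x.copy()
    support, alpha = [], np.zeros(K)
    while np.linalg.norm(residual) > tol and len(support) < max_atoms:
        # Atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j in support:
            break
        support.append(j)
        # Re-fit the coefficients of all selected atoms by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        alpha = np.zeros(K)
        alpha[support] = coef
        residual = x - D @ alpha
    return alpha

# Toy check: recover a 3-sparse vector.
rng = np.random.default_rng(0)
D = rng.standard_normal((30, 100))
D /= np.linalg.norm(D, axis=0)
true = np.zeros(100); true[[5, 42, 77]] = [1.0, -2.0, 0.5]
x = D @ true
est = omp(D, x, tol=1e-8, max_atoms=5)
print("recovered support:", np.flatnonzero(est))
```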
A projection gradient method for solving orthogonality constrained problems

Tong Yao; Ding Weiping

Abstract: Orthogonality constrained problems have wide applications in eigenvalue problems, sparse principal component analysis, and related areas. Because the orthogonality (equality) constraint is non-convex, solving such problems exactly is challenging. This paper proposes a projection gradient method that handles the orthogonality constraint using the Gram-Schmidt process. Its time complexity is O(r²n), which is lower than that of the classical SVD, and the method is simple to implement. Preliminary numerical results verify the validity of the proposed method.
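A minimal sketch of the approach described in the abstract — a gradient step on the objective followed by re-orthonormalization of the iterate, with NumPy's QR factorization playing the role of the Gram-Schmidt process; the objective (trace maximization, matching the eigenvalue experiment discussed below), step size and iteration count are illustrative assumptions.

```python
import numpy as np

def projected_gradient_orth(A, r, step=1e-2, n_iter=500, seed=0):
    """Maximize trace(X^T A X) over X with X^T X = I_r (leading eigenspace of A).

    Each iteration: gradient ascent step, then project back onto the orthogonality
    constraint by orthonormalizing the columns (QR ~ Gram-Schmidt)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    X, _ = np.linalg.qr(rng.standard_normal((n, r)))
    for _ in range(n_iter):
        G = 2 * A @ X                        # gradient of trace(X^T A X)
        X, _ = np.linalg.qr(X + step * G)    # projection via orthonormalization
    return X

# Toy check against NumPy's eigensolver.
rng = np.random.default_rng(1)
M = rng.standard_normal((200, 200))
A = M @ M.T                                  # symmetric positive semidefinite
X = projected_gradient_orth(A, r=6)
print("trace found  :", np.trace(X.T @ A @ X))
print("trace optimal:", np.sort(np.linalg.eigvalsh(A))[-6:].sum())
```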
[Journal] Journal of Hunan Institute of Science and Technology (Natural Sciences) [Year (Volume), Issue] 2015(000)002 [Pages] 5 (P5-9) [Keywords] orthogonality-constrained optimization; projected gradient algorithm; proximal point algorithm; Gram-Schmidt orthonormalization [Authors] Tong Yao; Ding Weiping [Affiliations] College of Mathematics and Computer Science, Fuzhou University, Fuzhou 350108; School of Mathematics, Hunan Institute of Science and Technology, Yueyang, Hunan 414006 [Language] Chinese [CLC] O224

Orthogonality-constrained optimization models arise widely in scientific and engineering computation, for example in linear and nonlinear eigenvalue problems [1,2], combinatorial optimization [3,4], sparse principal component analysis [5,6], face recognition [7], gene-expression data analysis [8], conformal geometry [10,11], 1-bit compressed sensing [12-14], and p-harmonic flows [15-18]. In general, an orthogonality-constrained optimization problem takes the form

min F(X)  s.t.  X^T Q X = I,   (1)

where F: R^{n×r} → R is differentiable, Q is a symmetric positive definite matrix, I is the r×r identity, and n ≥ r. Since Q is symmetric positive definite we may factor Q = L^T L; setting Y = LX transforms (1) into

min F(L^{-1} Y)  s.t.  Y^T Y = I.   (2)

Because solution techniques for linearly constrained problems are already mature, and to simplify (2), we concentrate on the orthogonality-constrained problem

min F(X)  s.t.  X^T X = I.   (3)

The orthogonality constraint is non-convex, so solving (1) or (3) exactly is challenging; no algorithm is currently guaranteed to find the global optimum of such problems, except in special cases such as extreme eigenvalue computation. Because preserving feasibility of the orthogonality constraint is computationally expensive, many methods avoid handling the nonlinear constraint directly and convert the constrained problem into an unconstrained one; the most common are penalty methods [21,22] and augmented Lagrangian methods [19,20]. The penalty method adds the constraint violation to the objective, turning (3) into the unconstrained problem

min F(X) + (ρ/2) ||X^T X − I||_F^2,   (4)

where ρ > 0 is the penalty parameter; (4) is equivalent to (3) only as ρ tends to infinity. To overcome this drawback, the standard augmented Lagrangian method was introduced. Wen, Yang et al. [23] solved the problem with a Lagrangian approach and proved convergence to a feasible point (and, under regularity conditions, to a stationary point). Manton et al. [24,25] proposed methods that exploit the Stiefel-manifold structure of orthogonality-constrained problems, and Osher et al. [26] proposed the SOC algorithm based on Bregman iteration: combining operator splitting with Bregman iteration, SOC alternates between an unconstrained subproblem and a quadratically constrained subproblem with a closed-form solution, and obtains good numerical results. When handling the orthogonality-constrained subproblem, however, SOC relies on a classical SVD, whose time complexity is O(n^3).

In this paper we propose a new way of handling the orthogonality constraint whose computational cost is lower than that of the classical SVD. Exploiting the special structure of the constraint in (3), the solution process is split into two steps: first, a proximal point algorithm solves a relaxed unconstrained problem to produce a predictor; second, the predictor is projected onto the closed orthogonality-constraint set. Basic numerical results show that this projected gradient method on the closed orthogonality set outperforms the classical augmented Lagrangian algorithm.

This section presents the projected gradient algorithm on the closed orthogonality set (POPGM for short). It proceeds in two steps: a proximal point algorithm first solves the relaxed unconstrained problem to obtain a predictor, and the predictor is then projected onto the closed orthogonality set, where the projection operator is a simple Gram-Schmidt orthonormalization. We first briefly review the proximal point algorithm.

1.1 The classical proximal point algorithm. Many methods exist for unconstrained optimization, including steepest descent, the Barzilai-Borwein method [29], the extragradient method [30], and others. Here we describe one effective solver, the proximal point algorithm (PPA) [27,28], originally proposed by Rockafellar et al. [31] for variational inequality problems arising from an abstract constrained optimization problem.

1.2 The projected gradient algorithm. We now state the proposed proximal-point orthogonality-constrained projected gradient algorithm (POPGM). Step 0: given initial parameters r0 > 0, v = 0.95, an initial point X0 ∈ Ω, a tolerance ε > 0 and ρ > 1, set k = 0. Remark: subproblem (10) is equivalent to a monotone variational inequality, and the variational inequality (12) can be solved approximately by the explicit projection (13). Since both (11) and (13) have explicit expressions, the predictor and Xk are easy to compute. Moreover, since r << n, the proposed projected gradient method on the closed orthogonality set costs far less time than the classical SVD with its O(n^3) complexity, because the orthogonality-handling step (11) requires only O(r^2 n).

This section illustrates the effectiveness of POPGM with a numerical example. The test environment is Windows 7 with an Intel(R) Core i3 CPU and 2.0 GB RAM, running MATLAB R2012b. The test problem and data are taken from Yin [9]. Given a symmetric matrix A ∈ R^{n×n} and a matrix V ∈ R^{n×r} with orthonormal columns, the function Trace(V^T A V) attains its maximum when the columns of V form an orthonormal basis of the eigenspace associated with the r largest eigenvalues. The problem can therefore be cast as the orthogonality-constrained problem

max Trace(V^T A V)  s.t.  V^T V = I,

where λ1 ≥ λ2 ≥ … ≥ λr are the r largest eigenvalues of A to be extracted and A is symmetric positive definite. Experimental data: the entries of the test matrix follow a uniform distribution. Experimental parameters: ρ = 1.6, ε = 1.0e-5. Initial point: X0 = randn(n, r), X0 = orth(X0); termination uses the tolerance ε.

Three algorithms are compared on this problem: the proposed POPGM, algorithm 2 of Yin [9] (Yin's Algo.), and the "eigs" function from the MATLAB toolbox. In the tables, FP/FY/FE denote the sum of the r largest eigenvalues (the objective value) obtained by POPGM, Yin's Algo. and eigs respectively; "win" is the difference between the objective values of the two algorithms compared; and "err" is the feasibility error.

Table 1 compares the iteration counts (iter) and CPU times (cput) of the three algorithms for fixed r = 6 as n varies from 500 to 5000. The number of POPGM iterations is largely insensitive to the matrix dimension. Compared with Yin's Algo., POPGM is both faster and extracts better solutions (win > 0) when n ≤ 2000; for n ≥ 3000 POPGM spends slightly more time but its extraction quality is clearly better. Compared with eigs, POPGM's time advantage grows with n, although the explanatory power of the extracted variables gradually weakens. The experiments indicate that POPGM performs well when the matrix dimension n is large.

Table 2 reports POPGM results for fixed n = 3000 as the number of extracted eigenvalues r varies from 1 to 23. The smaller r is, the less time POPGM needs; as r grows, FP increases and so does the running time; for r between 5 and 7 the time cost is moderate and the extraction quality is good.

Table 3 fixes r = 6 and replaces the orthonormalization step of the POPGM framework with an SVD, comparing the two ways of handling the orthogonality constraint. Within the POPGM framework the projection onto the closed orthogonality set is much cheaper than the classical SVD, while the sums of extracted eigenvalues produced by the two variants coincide and do not change with the dimension; the time advantage grows with the matrix dimension. The proposed treatment of the orthogonality constraint is therefore very effective.

This paper studies a fast algorithm for a class of orthogonality-constrained optimization problems. Combining the proximal point algorithm with Gram-Schmidt orthonormalization, we propose an inexact projected gradient algorithm based on the proximal point method: a proximal point step solves the relaxed unconstrained problem to obtain a predictor, which is then projected onto the closed orthogonality-constraint set. The main difference from the classical augmented Lagrangian and penalty methods is that POPGM obtains each iterate by projecting onto the orthogonality-constraint set and avoids the SVD, which speeds up the algorithm. Numerical experiments show that POPGM has good overall performance. (A minimal sketch of this predict-then-project idea, applied to the eigenvalue test problem above, follows the reference list.)

References
[1] Edelman A., Arias T. A., Smith S. T. The geometry of algorithms with orthogonality constraints [J]. SIAM J. Matrix Anal. Appl., 1998, 20(2): 303-353
[2] Caboussat A., Glowinski R., Pons V. An augmented Lagrangian approach to the numerical solution of a non-smooth eigenvalue problem [J]. J. Numer. Math., 2009, 17(1): 3-26
[3] Burkard R. E., Karisch S. E., Rendl F. QAPLIB: a quadratic assignment problem library [J]. J. Glob. Optim., 1997, 10(4): 291-403
[4] Loiola E. M., de Abreu N. M. M., Boaventura-Netto P. O., et al. A survey for the quadratic assignment problem [J]. Eur. J. Oper. Res., 2007, 176(2): 657-690
[5] Lu Z. S., Zhang Y. An augmented Lagrangian approach for sparse principal component analysis [J]. Math. Program., Ser. A, 2012, 135: 149-193
[6] Shen H., Huang J. Z. Sparse principal component analysis via regularized low rank matrix approximation [J]. J. Multivar. Anal., 2008, 99(6): 1015-1034
[7] Hancock P., Burton A., Bruce V. Face processing: human perception and principal components analysis [J]. Memory Cogn., 1996, 24: 26-40
[8] Botstein D. Gene shaving as a method for identifying distinct sets of genes with similar expression patterns [J]. Genome Biol., 2000, 1: 1-21
[9] Wen Z., Yin W. T. A feasible method for optimization with orthogonality constraints [J]. Math. Program., 2013, 143(1-2): 397-434
[10] Gu X., Yau S. Computing conformal structures of surfaces [J]. Commun. Inf. Syst., 2002, 2(2): 121-146
[11] Gu X., Yau S. T. Global conformal surface parameterization [C]. In Symposium on Geometry Processing, 2003: 127-137
[12] Boufounos P. T., Baraniuk R. G. 1-bit compressive sensing [C]. In Conference on Information Sciences and Systems (CISS), IEEE, 2008: 16-21
[13] Yan M., Yang Y., Osher S. Robust 1-bit compressive sensing using adaptive outlier pursuit [J]. IEEE Trans. Signal Process., 2012, 60(7): 3868-3875
[14] Laska J. N., Wen Z., Yin W., Baraniuk R. G. Trust, but verify: fast and accurate signal recovery from 1-bit compressive measurements [J]. IEEE Trans. Signal Process., 2011, 59(11): 5289
[15] Chan T. F., Shen J. Variational restoration of nonflat image features: models and algorithms [J]. SIAM J. Appl. Math., 2000, 61: 1338-1361
[16] Tang B., Sapiro G., Caselles V. Diffusion of general data on non-flat manifolds via harmonic maps theory: the direction diffusion case [J]. Int. J. Comput. Vis., 2000, 36: 149-161
[17] Vese L. A., Osher S. Numerical method for p-harmonic flows and applications to image processing [J]. SIAM J. Numer. Anal., 2002, 40(6): 2085-2104
[18] Goldfarb D., Wen Z., Yin W. A curvilinear search method for the p-harmonic flow on spheres [J]. SIAM J. Imaging Sci., 2009, 2: 84-109
[19] Glowinski R., Le Tallec P. Augmented Lagrangian and operator splitting methods in nonlinear mechanics [M]. SIAM Studies in Applied Mathematics, vol. 9, SIAM, Philadelphia, PA, 1989
[20] Fortin M., Glowinski R. Augmented Lagrangian methods: applications to the numerical solution of boundary-value problems [M]. North-Holland, 2000, 15
[21] Nocedal J., Wright S. J. Numerical Optimization [M]. Springer, New York, 2006
[22] Bethuel F., Brezis H., Hélein F. Asymptotics for the minimization of a Ginzburg-Landau functional [J]. Calc. Var. Partial Differ. Equ., 1993, 1(2): 123-148
[23] Wen Z., Yang C., Liu X. Trace-penalty minimization for large-scale eigenspace computation [J]. J. Sci. Comput., to appear
[24] Manton J. H. Optimization algorithms exploiting unitary constraints [J]. IEEE Trans. Signal Process., 2002, 50(3): 635-650
[25] Absil P.-A., Mahony R., Sepulchre R. Optimization algorithms on matrix manifolds [M]. Princeton University Press, Princeton, 2008
[26] Lai R., Osher S. A splitting method for orthogonality constrained problems [J]. J. Sci. Comput., 2014, 58(2): 431-449
[27] He B. S., Fu X. L., Jiang Z. K. Proximal point algorithm using a linear proximal term [J]. J. Optim. Theory Appl., 2009, 141: 209-239
[28] He B. S., Yuan X. M. Convergence analysis of primal-dual algorithms for a saddle-point problem: from contraction perspective [J]. SIAM J. Imaging Sci., 2012, 5: 119-149
[29] Barzilai J., Borwein J. M. Two-point step size gradient methods [J]. IMA J. Numer. Anal., 1988, 8: 141-148
[30] Korpelevich G. M. The extragradient method for finding saddle points and other problems [J]. Ekonomika i Matematicheskie Metody, 1976, 12: 747-756
[31] Rockafellar R. T. Monotone operators and the proximal point algorithm [J]. SIAM J. Control Optim., 1976, 14: 877-898
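The predict-then-project structure described above lends itself to a compact illustration. The following Python sketch is a loose reimplementation under stated assumptions, not the authors' exact POPGM iteration: it takes a plain gradient step on F(X) = Trace(X^T A X) and then orthonormalizes the predictor with a reduced QR factorization (a Gram-Schmidt-type projection costing O(n r^2) rather than the O(n^3) of a full SVD). The step size, tolerance and test matrix are illustrative choices.

```python
# Minimal "predict, then project onto {X : X^T X = I}" sketch (not the paper's
# exact POPGM): gradient step on trace(X^T A X), followed by a Gram-Schmidt-type
# orthonormalization implemented with a reduced QR factorization.
import numpy as np

def project_orthogonal(X):
    """Orthonormalize the columns of X (reduced QR, cost O(n r^2))."""
    Q, _ = np.linalg.qr(X)
    return Q

def popgm_like_eig(A, r, step=1e-2, tol=1e-5, max_iter=500):
    n = A.shape[0]
    X = project_orthogonal(np.random.randn(n, r))
    for _ in range(max_iter):
        grad = 2.0 * A @ X                      # gradient of trace(X^T A X), A symmetric
        X_new = project_orthogonal(X + step * grad)
        if np.linalg.norm(X_new - X) < tol:
            X = X_new
            break
        X = X_new
    return X, np.trace(X.T @ A @ X)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((300, 300))
    A = M @ M.T                                 # symmetric positive definite test matrix
    X, val = popgm_like_eig(A, r=6)
    print(val, np.sort(np.linalg.eigvalsh(A))[-6:].sum())
```

On the trace-maximization test problem this projected iteration converges to the dominant r-dimensional eigenspace, so the printed objective should approach the sum of the r largest eigenvalues.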
Chinese-English Glossary of Econometric Terms
Controlled experiments Conventional depth Convolution Corrected factor Corrected mean Correction coefficient Correctness Correlation coefficient Correlation index Correspondence Counting Counts Covariance Covariant Cox Regression Criteria for fitting Criteria of least squares Critical ratio Critical region Critical value
Asymmetric distribution Asymptotic bias Asymptotic efficiency Asymptotic variance Attributable risk Attribute data Attribution Autocorrelation Autocorrelation of residuals Average Average confidence interval length Average growth rate BBB Bar chart Bar graph Base period Bayes' theorem Bell-shaped curve Bernoulli distribution Best-trim estimator Bias Binary logistic regression Binomial distribution Bisquare Bivariate Correlate Bivariate normal distribution Bivariate normal population Biweight interval Biweight M-estimator Block BMDP(Biomedical computer programs) Boxplots Breakdown bound CCC Canonical correlation Caption Case-control study Categorical variable Catenary Cauchy distribution Cause-and-effect relationship Cell Censoring
Compressed-Sensing-Based Channel Estimation for Millimeter-Wave Massive MIMO
Authors: Liu Haibo, Du Jiang, Huang Tianci, Ma Teng. Source: China New Telecommunications, 2022, No. 08.
Abstract: In millimeter-wave massive MIMO systems the path loss of millimeter waves is extremely severe, so only a small number of usable channels exist in space; together with the high-gain narrow beams formed by large-scale antenna arrays, this makes the beam-domain channel even sparser. This channel sparsity combines naturally with compressed sensing theory. This paper analyzes the advantages and disadvantages of the orthogonal matching pursuit algorithm and the sparsity adaptive matching pursuit algorithm for channel estimation, and applies an improved sparsity adaptive matching pursuit algorithm to millimeter-wave massive MIMO channel estimation, obtaining good estimation results.
Keywords: millimeter wave; MIMO; compressed sensing; sparsity adaptivity.
1. Introduction
Millimeter wave (mmWave) occupies the 30 GHz to 300 GHz band, where spectrum resources are abundant; combined with massive antenna arrays it can compensate for the high path loss of millimeter waves, and it is one of the key technologies of 5G communications [1].
In millimeter-wave massive MIMO systems, knowledge of channel state information is crucial for precoding.
Only when the channel state is estimated accurately can the multi-antenna advantage of massive MIMO provide more degrees of freedom and thereby increase channel capacity [2].
Channel estimation is therefore essential.
Since compressed sensing (CS) theory was proposed, it has been widely applied in many fields [3], such as image processing, speech coding and radar surveillance.
In millimeter-wave massive MIMO channels the path loss is extremely high, only a few usable paths can support communication, and the high-gain narrow beams generated by the massive array make the channel even sparser.
Exploiting this sparsity, compressed sensing theory can be applied to channel estimation, converting the channel estimation problem into a sparse signal reconstruction problem and enabling low-complexity, high-accuracy estimation.
Within compressed sensing, greedy iterative algorithms are widely used because of their low computational complexity.
Reference [4] uses the orthogonal matching pursuit (OMP) algorithm to estimate sparse multipath channels, with lower complexity and higher accuracy than the traditional LS algorithm.
However, OMP requires the channel sparsity level to be known in advance, which is rarely the case in practice, so its practical value is limited.
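To make that assumption concrete, here is a minimal OMP sketch in Python (an illustration only, not the estimator of reference [4]): it greedily picks the dictionary column most correlated with the residual and stops after exactly K picks, which is precisely where prior knowledge of the channel sparsity K enters; SAMP-type methods replace this fixed K with an adaptively grown support. The problem sizes are illustrative.

```python
# Minimal orthogonal matching pursuit (OMP) for y = Phi @ h with K-sparse h.
import numpy as np

def omp(Phi, y, K):
    """Recover a K-sparse vector from measurements y = Phi @ h (+ noise)."""
    n_atoms = Phi.shape[1]
    residual = y.copy()
    support = []
    h_hat = np.zeros(n_atoms, dtype=Phi.dtype)
    for _ in range(K):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(Phi.conj().T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the current support, then update the residual
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    h_hat[support] = coeffs
    return h_hat

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    M, N, K = 64, 256, 5                       # measurements, grid size, sparsity
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)
    h = np.zeros(N); h[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    y = Phi @ h
    print(np.linalg.norm(omp(Phi, y, K) - h))  # should be close to zero
```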
Sparse Representation
spark(D) > 2‖α‖₀
The symbol above denotes the smallest number of linearly dependent columns of the dictionary (its spark). Under this condition the ℓ0-norm optimization problem has a unique solution. Yet even with uniqueness established, solving the problem remains NP-hard.
Moving on to 2006, the Chinese-American mathematician Terence Tao, working with Donoho's student Candès, proved that under the RIP condition the ℓ0-norm optimization problem has the same solution as the following ℓ1-norm optimization problem:
min ‖α‖₁  s.t.  x = Dα
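The ℓ1 problem is convex, which is what makes this equivalence practically useful: it can be handed to a generic linear-programming solver. The sketch below is a minimal basis-pursuit illustration (generic code, not taken from the works cited here), recasting min ‖α‖₁ subject to Dα = x as a linear program by splitting α into its positive and negative parts.

```python
# Basis pursuit (min ||a||_1 s.t. D a = y) recast as a linear program.
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(D, y):
    m, n = D.shape
    # split a = u - v with u, v >= 0; minimize sum(u) + sum(v)
    c = np.ones(2 * n)
    A_eq = np.hstack([D, -D])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    D = rng.standard_normal((30, 80))
    a_true = np.zeros(80); a_true[[3, 17, 42]] = [1.5, -2.0, 0.7]
    y = D @ a_true
    print(np.round(basis_pursuit(D, y)[[3, 17, 42]], 3))   # recovers the sparse coefficients
```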
Thank you!
Illustration of pursuit iterations: the coefficient vector evolves as α = (0, 0, 0.75) → α = (0, 0.24, 0.75) → α = (0, 0.24, 0.65).
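A plain matching-pursuit loop reproduces the kind of coefficient updates illustrated above: at each iteration the inner product of the residual with every atom is computed, the best-matching atom is selected, and its contribution is peeled off. The following Python sketch assumes unit-norm dictionary columns and shows only the generic algorithm (the PSO-accelerated atom search discussed below would replace the exhaustive argmax).

```python
# Minimal matching pursuit with the inner-product atom-selection step.
import numpy as np

def matching_pursuit(D, x, n_iter=10):
    """D: dictionary with unit-norm columns; x: signal to decompose."""
    residual = x.copy()
    alpha = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual              # inner products with every atom
        j = int(np.argmax(np.abs(corr)))   # best-matching atom
        alpha[j] += corr[j]
        residual -= corr[j] * D[:, j]      # peel off its contribution
    return alpha, residual
```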
For the step above that finds the best-matching atom by inner products, since the number of atoms is large, I wondered whether this search could be optimized, so I used PSO (particle swarm optimization) to look for the best atom. It is simpler than a genetic algorithm, and I found it quite interesting as well. Learning-based methods:
Different input stimuli (different photographs) activate different response neurons.
The sparse representation of an image is formed by imitating the perceptual mechanism of the human visual system: each atom in the dictionary is regarded as a neuron, the whole dictionary corresponds to the population of neurons in the visual cortex, and the atoms have response properties similar to those neurons. The Gabor function is used as the receptive-field function of simple cells to characterize their response.
g(x, y) = exp(−(x′² + γ²·y′²) / (2σ²)) · cos(2π·k·x′)
x′ = (x − x₀)·cosθ + (y − y₀)·sinθ
y′ = −(x − x₀)·sinθ + (y − y₀)·cosθ
The Gabor function
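A minimal sketch that evaluates this Gabor receptive-field model on a pixel grid is given below; the parameter values (kernel size, σ, γ, k) are illustrative assumptions, and a dictionary of such atoms would be obtained by sampling θ, k and the center (x₀, y₀).

```python
# Evaluate the Gabor receptive-field model above on a square pixel grid.
import numpy as np

def gabor_kernel(size=31, sigma=4.0, theta=0.0, k=0.25, gamma=0.5, x0=0.0, y0=0.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)      # rotated x'
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)     # rotated y'
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * k * xr)
```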
Applications of sparse representation
Image restoration: the right-hand result is recovered from the left-hand image.
Image inpainting: inpainting the left-hand image yields the result on the right.
Image deblurring: the top-left image is the blurred input, the bottom-right is the sharp output, and the images in between show the iterations.
Object detection
Bicycle detection: left, the input image; middle, the location probability map; right, the detection result.
IDEMIA Augmented Vision Platform: Product Brochure
Augmented Vision Platform: video analytics to ensure a safer environment.
Capabilities: color filter; silhouettes; vehicle classification; partial biometrics; license plate recognition.
Use cases: detect and identify a person of interest; detect and warn of an intrusion; detect suspicious objects and alert; detect and recognize a person and identify clothes using a color filter; secure access for authorized personnel; speed up post-event investigations.

Instant facial recognition for frictionless access control: the solution identifies end-users via a standard IP camera, and the identification result is pushed to the access control software to authorize access.
Post-event video analytics: IDEMIA's video analytics tools and algorithms help investigators find and qualify information from video sources faster using automated technology.
Video analysis at the point of capture: this video processing solution is built on the world's leading embedded Artificial Intelligence Edge computing platform.
Live video analytics: this solution monitors CCTV cameras and alerts security staff if necessary. The identification of persons of interest is based on biometric and non-biometric attributes (faces, silhouettes, vehicles and objects of interest).

Today there is an immense volume of videos and images due to the prevalence of CCTV streams, smartphones and more. Augmented Vision can protect areas of interest live. It can be used to prevent security incidents in highly frequented public or private spaces but also to ensure frictionless access to restricted areas for authorized personnel, while protecting citizens' privacy with the highest level of data protection. Augmented Vision analyzes hours and days of video streams and thousands of photos to help find subjects and objects of interest in post-event investigations. It is designed to make sense of all available video and image data, allowing operators to quickly create leads to solve crimes. Augmented Vision boosts results, saves time and increases security.
Our customers: government/public sector; airports and public transportation; corporate and commercial; stadiums and arenas; gaming and entertainment.

Boosting the efficiency of operators using video analytics. When investigating a suspect you need to leverage all sources, including processing and analyzing a large variety and quantity of video and image formats. Once Augmented Vision has been fed the video and image data, it recognizes, records, and classifies persons and objects of interest such as silhouettes, faces, vehicles and license plates. This automatically improves suspect traceability and enhances associations that were previously unknown.
Solve a case faster: with Augmented Vision, you do not have to spend endless hours watching video footage or looking at images. This cutting-edge tool identifies normal background noise in a scene and detects, extracts and classifies pedestrians, faces, vehicles and license plates, identifying potential clues faster and more effectively. As a result, you maximize resources to obtain actionable intelligence faster.
Allocate your resources better: previously, it would have taken multiple people and weeks of work to view 500 hours of video footage and study half a million faces. Augmented Vision can complete this in almost a day on a small workstation system with one single officer.

Transforming video protection into actionable intelligence. In highly frequented public or private spaces, the use of CCTV cameras can help spot potential threats, preventing situations from escalating and ultimately keeping people safe. However, it is not always easy to understand the complexity of many CCTV streams simultaneously collected over many hours. Determining your key persons of interest (POI) is important, but how do you survey everything at the same time to make sense of momentary events? Augmented Vision is the answer. It performs live monitoring across thousands of CCTV cameras, differentiating between individuals based on biometric and non-biometric features to identify potential POI and automatically alert security staff upon a match. This allows officers more time to follow up on actionable intelligence, rather than reviewing endless video streams. Typical tasks: identifying high-risk and vulnerable individuals; alerting officers about targeted events of interest; following POI across multiple cameras; identifying license plates, faces, silhouettes and more.
Examples of event analytics: intrusion detection; people counting; mask detection ("NO MASK" detected); license plate recognition; vehicle identification; soft biometric data such as gender and age.

Providing secure and frictionless access to authorized personnel. The Augmented Vision Platform can be easily integrated with any existing access control software, leveraging IP cameras to manage access facilities. Facial data is pushed either from server to server or from server to door controller, allowing Augmented Vision to fit all ecosystems. This technology can easily identify visitors and collaborators at a distance, creating a seamless biometric identification experience. IDEMIA's advanced algorithms enable in-motion recognition while ensuring the highest accuracy. Using facial recognition means that no direct contact with the equipment is needed, a hygienic alternative to other systems available on the market. The strength of the solution resides in its ability to analyze the entire situation around access points. It is capable of enabling group access and detecting suspicious behaviors. It increases security by spotting attempts at tailgating and locking a door when an unauthorized person is identified in the field of view.

Leading embedded Artificial Intelligence Edge computing. Augmented Vision comprises a processing solution built on the NVIDIA Jetson (embedded ARM64 device with GPU and SoC). This gives you the flexibility of small deployments for covert investigations and of distributed solutions where infrastructure is sparse and communications are limited. With the software and hardware in one very small package, the Edge Embedded appliance is a high-performance, low-power computing device optimized for IDEMIA's next-generation video analytics and computer vision solutions. Augmented Vision can be used as a standalone or as a hybrid solution and is therefore compatible with all existing cameras; it can be installed rapidly for quick and easy deployment. The Edge Embedded solution supports diverse operations, from multiple cameras and high person throughput to single-camera deployments and covert or isolated applications.

The platform processes data from a CCTV camera's field of view (an image or recorded video), live and/or post-event. Data is then converted into mathematical models and analyzed with respect to known object types (e.g. faces, people, vehicles, license plates) and compared, where applicable, to a client's database. The results of this analysis are presented to the operator in various dashboards. Processing servers (distributed architecture), cloud or on premises.

Platform highlights:
Designed for CCTVs: the solution is designed to augment the capabilities of standard CCTV systems.
Pedestrian detection: detection and recognition of a person with top/bottom color filtering.
Scalability: system-wide scalability of the solution for enterprise performance.
Hybrid deployments: cloud, on-premises hardware and Edge technologies working together.
Geo-tracking and mapping: localization and positioning of cameras and POI across maps.
Vehicle detection: detection, classification and identification of multiple vehicle types.
Proprietary neural networks: IDEMIA's neural networks are developed and trained in-house.
Parallel processing: system-wide parallel processing for faster analysis of data.
Versatile plug-in ability: third-party low-level algorithm plug-ins for fast integration.
Color filter: POI and object filtering by color.
Event prioritization: analytics processing can start at any point in time, processing outward.
Object linking: automatic and manual associations for faster case management.
Embedded: built upon the world-leading AI Edge computing platform NVIDIA Jetson.
Collaboration: enables local and remote teams to work together on a single case or across multiple cases.
Facial recognition: NIST-tested, high-quality face detection and recognition.
Meta-tagging: automatic and customizable tagging of POI data, creating searchable tags.
Event analytics: intrusion detection alerts; mask detection alerts; person detection; people counting.
License plate recognition: detection and recognition of license plates, including partial plates.

Why is IDEMIA the ideal partner for you? 0.1% false alerts per day; 5 billion biometric records managed worldwide; top-ranked by NIST across multiple biometrics. With over 40 years' experience and more than five billion biometric records managed worldwide, IDEMIA is the undisputed leader in biometric security systems. Our algorithms, consistently top-ranked by NIST, and sensor technologies, combined with our end-product design and manufacturing expertise, make us the partner of choice for the most prestigious organizations. We create off-the-shelf and tailor-made solutions to meet the complex and ever-changing demands of our customers. We collaborate with the world's leading law enforcement agencies and provide the most advanced tools and guidance available on the market. We supply companies around the globe with secure access control using biometric technology as well as solutions to facilitate their day-to-day activities, bringing more security and peace of mind. We develop products and solutions in accordance with the Privacy-by-Design and Privacy-by-Default principles. We are strongly committed to helping protect and secure citizens' privacy with the highest possible level of data protection for identity verification and authentication technologies. Our goal is to keep the world as safe and secure as possible. Our products AUGMENT your VISION of video data.
Complex Terrain Information Perception and Scene Reconstruction for Rover Exploration Missions
Abstract: To address obstacle avoidance during rover exploration in complex terrain, a point-cloud-based method for terrain information perception and scene modeling is proposed. The acquired point cloud is first sparsely sampled and denoised by filtering. Then, combining the rover's obstacle-crossing limits with an improved random sample consensus (RANSAC) algorithm, an adaptive reference plane is fitted and taken as the traversable region. Next, a density-based clustering algorithm perceives terrain feature information, and a convex hull algorithm extracts the contours of terrain features. Finally, fast 3D scene reconstruction is carried out together with the adaptive reference plane, providing ground observers with an intuitive, rapidly built 3D model of the rover's surroundings. Simulation experiments on complex terrain show that the method effectively captures complex terrain information and greatly improves the efficiency of scene reconstruction. (A minimal RANSAC plane-fitting sketch related to the reference-plane step follows the author list below.)
Keywords: terrain information perception; random sample consensus algorithm; density-based clustering algorithm; convex hull algorithm; fast 3D reconstruction. CLC: TP242.6+2; V557.3. Document code: A. Article ID: 1674-5825(2021)03-0339-11
Manned Spaceflight (载人航天), Vol. 27, No. 3, June 2021
Zhao Di 1,2, Hu Mengya 1, Li Shiqi 2, Ji Hechao 2, He Ning 3
(1. Hubei University of Technology, Wuhan 430068; 2. Huazhong University of Science and Technology, Wuhan 430074; 3. China Astronaut Research and Training Center, Beijing 100094)
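The reference-plane step of the abstract above can be pictured with a compact RANSAC plane fit over a point cloud. The sketch below is a generic baseline under stated assumptions (a fixed inlier threshold and uniform triple sampling); the paper's improved sampling rule and the rover-specific obstacle-crossing limits are not reproduced.

```python
# Generic RANSAC plane fit for an (N, 3) point cloud.
import numpy as np

def ransac_plane(points, n_iter=200, threshold=0.05, rng=None):
    """Fit a plane n.p + d = 0; return (normal, d, boolean inlier mask)."""
    rng = np.random.default_rng(rng)
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p1
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = dist < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers
```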
Sparse Representations and the Basis Pursuit Algorithm
Michael Elad
The CS Department, The Technion – Israel Institute of Technology, Haifa 32000, Israel. IPAM – MGA Program
• Signal Prior: in inverse problems seek a solution that has a sparse representation over a predetermined dictionary, and this way regularize the problem (just as TV, bilateral, Beltrami flow, wavelet, and other priors are used).
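As a concrete, hedged illustration of that prior (a minimal sketch, not Elad's own code), the following ISTA loop minimizes 1/2‖Dα − y‖² + λ‖α‖₁ over the representation α for a fixed dictionary D and returns the regularized signal x = Dα; the step size comes from the spectral norm of D, and λ is an illustrative choice.

```python
# ISTA for the l1-regularized synthesis prior x = D a.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_denoise(D, y, lam=0.1, n_iter=200):
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)           # gradient of the data-fit term
        a = soft_threshold(a - grad / L, lam / L)
    return D @ a, a                        # regularized signal and its sparse code
```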
Signals' Origin in Sparse-Land
We shall assume that our signals of interest emerge from a random generator machine M
[Figure: a random signal generator machine M emitting signals x]
Sparse representations for Signals – Theory and Applications
3. Success of BP/MP for Inverse Problems
Uniqueness, stability of BP and MP
4. Applications
Image separation and inpainting
Sparse-Representation-Based Multi-Target Localization for Frequency Diverse Array MIMO Radar
Chen Hui, Shao Huaizong, Pan Ye, Wang Wenqin
Abstract: For frequency diverse array (FDA) multiple-input multiple-output (MIMO) radar, a target localization algorithm based on the sparse representation idea of compressed sensing is proposed. The characteristics of MIMO radar and of the frequency diverse array are first reviewed, and the properties of FDA-MIMO radar are then studied: it not only retains the advantages of MIMO radar but can also sense the range information of targets. The received-data model of FDA-MIMO radar is formulated mathematically, and the target localization problem is expressed as a cost function under a sparse representation framework. Finally, the cost function is solved with convex optimization tools, and the indices of the non-zero elements of the resulting sparse weight vector are mapped to the angles and ranges of the targets. Compared with the classical MUSIC algorithm the proposed method achieves better localization performance, and computer simulations demonstrate its effectiveness.
English abstract: For the frequency diverse array (FDA) multiple-input and multiple-output (MIMO) radar, a target localization algorithm in the sparse signal representation perspective is presented. Firstly, the characters of the MIMO radar and FDA are reviewed, then the properties of the FDA MIMO radar are studied; that is, it not only owns the merits of the MIMO radar, but also can sense the range information of the targets. Its receiving mathematical measurement is also modeled, and the target localization problem is described as a cost function under the sparse representation framework. Finally, the angle and the range of the targets are estimated by mapping the non-zero element indexes of the sparse vector which is obtained by solving the cost function using existing convex optimization tools. Compared with the existing classic MUSIC algorithm, the proposed algorithm can achieve better localization performance. The computer simulation results demonstrate the effectiveness of the algorithm.
[Journal] Radar Science and Technology [Year (Volume), Issue] 2015(000)003 [Pages] 6 (P259-264) [Keywords] FDA MIMO radar; compressed sensing; sparse representation; parameter estimation [Authors] Chen Hui; Shao Huaizong; Pan Ye; Wang Wenqin [Affiliation] School of Communication and Information Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731 [Language] Chinese [CLC] TN958
0 Introduction
Multiple-input multiple-output (MIMO) radar has received wide attention in recent years [1-3].
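The sparse-representation formulation described above can be pictured schematically: the angle-range plane is discretized into a grid, every grid point contributes one column to an overcomplete dictionary through the array's steering-vector model, and the indices of the non-zero recovered coefficients give the target angles and ranges. Because the exact FDA-MIMO array response is not reproduced here, the steering model is left as a user-supplied function in this sketch; any of the sparse solvers shown earlier (OMP, basis pursuit) can then be applied to the dictionary it builds.

```python
# Schematic grid-dictionary construction for joint angle-range estimation.
# `steering(theta, r)` must return the (complex) array response for a target at
# angle theta and range r; it is an assumed interface, not the cited paper's model.
import numpy as np

def build_angle_range_dictionary(steering, angles, ranges):
    grid = [(th, r) for th in angles for r in ranges]
    A = np.column_stack([steering(th, r) for th, r in grid])
    return A, grid

# Usage sketch with the OMP routine given earlier (K assumed targets):
#   A, grid = build_angle_range_dictionary(steering, angles, ranges)
#   alpha = omp(A, y, K)
#   targets = [grid[j] for j in np.flatnonzero(np.abs(alpha) > 1e-6)]
```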
A survey of sparse representation: algorithms and applications
Zheng Zhang, Student Member, IEEE, Yong Xu, Senior Member, IEEE, Jian Yang, Member, IEEE, Xuelong Li, Fellow, IEEE, and David Zhang, Fellow, IEEE
I. INTRODUCTION
With advancements in mathematics, linear representation methods (LRBM) have been well studied and have recently received considerable attention [1, 2]. The sparse representation method is the most representative methodology of the LRBM and has also been proven to be an extraordinarily powerful solution to a wide range of application fields, especially in signal processing, image processing, machine learning, and computer vision, such as image denoising, deblurring, inpainting, image restoration, super-resolution, visual tracking, image classification and image segmentation [3–10]. Sparse representation has shown huge potential capabilities in handling these problems. Sparse representation, from the viewpoint of its origin, is directly related to compressed sensing (CS) [11–13], which is one of the most popular topics in recent years. Donoho [11] first proposed the original concept of compressed sensing. CS theory suggests that if a signal is sparse or compressive, the original signal can be reconstructed by exploiting a few measured values, which are far fewer than those suggested by previously used theories such as Shannon's sampling theorem (SST). Candes et al. [13], from the mathematical perspective, demonstrated the rationale of CS theory, i.e. the original signal could be precisely reconstructed by utilizing a small portion of Fourier transformation coefficients. Baraniuk [12] provided a concrete analysis of compressed sensing and presented a specific interpretation on some solutions of different signal reconstruction algorithms. All these works [11–17] laid the foundation of CS theory and provided the theoretical basis for future research. Thus, a large number of algorithms based on CS theory have been proposed to address different problems in various fields. Moreover, CS theory always includes the three basic components: sparse representation, encoding measuring, and reconstructing algorithm. As an indispensable prerequisite of CS theory, the sparse representation theory [4, 7–10, 17] is the most outstanding technique used to conquer difficulties that appear in many fields. For example, the methodology of sparse representation is a novel signal sampling method for the sparse or compressible signal and has been successfully applied to signal processing [4–6]. Sparse representation has attracted much attention in recent years and many examples in different fields can be found where sparse representation is definitely beneficial and favorable [18, 19]. One example is image classification, where the basic goal is to classify the given test image into several predefined categories. It has been demonstrated that natural images can be sparsely represented from the perspective of the properties of visual neurons. The sparse representation based classification (SRC) method [20] first assumes that the test sample can be sufficiently represented by samples from the same subject. Specifically, SRC exploits the linear combination of training samples to represent the test sample and computes sparse representation coefficients of the linear representation system, and then calculates the reconstruction residuals of each class employing the sparse representation
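As a concrete illustration of the SRC decision rule just summarized (a minimal sketch, not the implementation of [20]): the test sample is coded over the matrix of stacked training samples by any sparse solver, and the class whose coefficients reconstruct the sample with the smallest residual wins. The `sparse_code` argument below is a placeholder for whichever ℓ1/ℓ0 solver is preferred.

```python
# Minimal SRC-style classification by class-restricted reconstruction residuals.
import numpy as np

def src_classify(X_train, labels, x_test, sparse_code):
    """X_train: (d, n) column-wise training samples; labels: length-n class ids;
    sparse_code(X_train, x_test) must return a length-n coefficient vector."""
    alpha = sparse_code(X_train, x_test)
    labels = np.asarray(labels)
    residuals = {}
    for c in set(labels.tolist()):
        alpha_c = np.where(labels == c, alpha, 0.0)   # keep only class-c coefficients
        residuals[c] = np.linalg.norm(x_test - X_train @ alpha_c)
    return min(residuals, key=residuals.get)          # class with smallest residual
```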
Zheng Zhang and Yong Xu is with the Bio-Computing Research Center, Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen 518055, Guangdong, P.R. China; Key Laboratory of Network Oriented Intelligent Computation, Shenzhen 518055, Guangdong, P.R. China e-mail: (yongxu@). Jian Yang is with the College of Computer Science and Technology, Nanjing University of Science and Technology, Nanjing 210094, P. R. China. Xuelong Li is with the Center for OPTical IMagery Analysis and Learning (OPTIMAL), State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, Shaanxi, P. R. China. David Zhang is with the Biometrics Research Center, The Hong Kong Polytechnic University, Hong Kong Corresponding author: Yong Xu (email: yongxu@).
arXiv:1602.07017v1 [cs.CV] 23 Feb 2016
Abstract—Sparse representation has attracted much attention from researchers in fields of signal processing, image processing, computer vision and pattern recognition. Sparse representation also has a good reputation in both theoretical research and practical applications. Many different algorithms have been proposed for sparse representation. The main purpose of this article is to provide a comprehensive study and an updated review on sparse representation and to supply a guidance for researchers. The taxonomy of sparse representation methods can be studied from various viewpoints. For example, in terms of different norm minimizations used in sparsity constraints, the methods can be roughly categorized into five groups: sparse representation with l0-norm minimization, sparse representation with lp-norm (0<p<1) minimization, sparse representation with l1-norm minimization and sparse representation with l2,1-norm minimization. In this paper, a comprehensive overview of sparse representation is provided. The available sparse representation algorithms can also be empirically categorized into four groups: greedy strategy approximation, constrained optimization, proximity algorithm-based optimization, and homotopy algorithm-based sparse representation. The rationales of different algorithms in each category are analyzed and a wide range of sparse representation applications are summarized, which could sufficiently reveal the potential nature of the sparse representation theory. Specifically, an experimentally comparative study of these sparse representation algorithms is presented. The Matlab code used in this paper is available at: /lunwen.html. Index Terms—Sparse representation, compressive sensing, greedy algorithm, constrained optimization, proximal algorithm, homotopy algorithm, dictionary learning