IMPROVED SPARSE APPROXIMATION OVER QUASI-INCOHERENT DICTIONARIES

J. A. Tropp*, A. C. Gilbert†, S. Muthukrishnan‡, M. J. Strauss§

* Inst. for Comp. Eng. and Sci. (ICES), The University of Texas at Austin, Austin, TX 78712, jtropp@ices.utexas.edu.
† AT&T Labs-Research, 180 Park Avenue, Florham Park, NJ 07932, agilbert@research.att.com.
‡ AT&T Labs-Research & Rutgers University, 180 Park Avenue, Florham Park, NJ 07932, muthu@research.att.com. Supported in part by NSF CCR 00-87022 and NSF ITR 0220280.
§ AT&T Labs-Research, 180 Park Avenue, Florham Park, NJ 07932, mstrauss@research.att.com.

ABSTRACT

This paper discusses a new greedy algorithm for solving the sparse approximation problem over quasi-incoherent dictionaries. These dictionaries consist of waveforms that are uncorrelated "on average," and they provide a natural generalization of incoherent dictionaries. The algorithm provides strong guarantees on the quality of the approximations it produces, unlike most other methods for sparse approximation. Moreover, very efficient implementations are possible via approximate nearest-neighbor data structures.

1. INTRODUCTION

Sparse approximation is the problem of finding a concise representation of a given signal as a linear combination of a few elementary signals chosen from a rich collection. It has shown empirical promise in image processing tasks such as feature extraction, because the approximation cannot succeed unless it discovers structure latent in the image. For example, Starck, Donoho, and Candès have used sparse approximation to extract features from noisy astronomical photographs and volumetric data [1]. Nevertheless, it has been difficult to establish that proposed algorithms actually solve the sparse approximation problem. This paper makes another step in that direction by describing a greedy algorithm that computes solutions with provable quality guarantees.

A dictionary D for the signal space R^d is a collection of vectors that spans the entire space. The vectors are called atoms, and we write them as φ_λ. The index λ may parameterize the time/scale or time/frequency localization of each atom, or it may be a label without any additional meaning. The number of atoms is often much larger than the signal dimension. The sparse approximation problem with respect to D is to compute a good representation of each input signal as a short linear combination of atoms. Specifically, for an arbitrary signal x, we search for an m-term superposition

    a_opt = Σ_{λ ∈ Λ_opt} b_λ φ_λ

which minimizes ||x − a_opt||_2. We must determine both the optimal atoms (the m atoms whose indices are listed in Λ_opt) and the optimal coefficients b_λ.

If D is an orthonormal basis, it is computationally easy to find a_opt. For the indices Λ_opt, simply take the m atoms with the largest inner products |<x, φ_λ>| and form a_opt = Σ_{λ ∈ Λ_opt} <x, φ_λ> φ_λ (see the sketch below). Unfortunately, it can be difficult or impossible to choose an appropriate orthonormal basis for a given situation. For example, if the signals contain both harmonic and impulsive components, a single orthonormal basis will not represent them both efficiently. We have much more freedom with a redundant dictionary, since it may include a rich collection of waveforms which can provide concise representations of many different structures.

The price that we pay for additional flexibility is an increased cost to determine these concise representations. For general redundant dictionaries, it is computationally infeasible to search all possible m-term representations.
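To make the orthonormal case concrete, here is a minimal sketch (our illustration, not part of the paper): it keeps the m atoms with the largest inner products |<x, φ_λ>| and forms the corresponding superposition. The dictionary is assumed to be stored as the columns of a NumPy array, and the function name is ours.

```python
import numpy as np

def best_m_term_orthonormal(x, Phi, m):
    """Best m-term approximation of x over an orthonormal basis Phi (atoms as columns).

    Orthonormality makes the problem easy: keep the m atoms whose inner products
    <x, phi_lambda> are largest in magnitude, with those inner products as coefficients.
    """
    coeffs = Phi.T @ x                              # inner products <x, phi_lambda>
    Lambda_opt = np.argsort(np.abs(coeffs))[-m:]    # indices of the m largest magnitudes
    a_opt = Phi[:, Lambda_opt] @ coeffs[Lambda_opt] # superposition of the selected atoms
    return a_opt, Lambda_opt

# Toy example: the spike (identity) basis in R^8 with m = 2
Phi = np.eye(8)
x = np.array([0.1, 3.0, -0.2, 0.0, 2.5, 0.0, 0.1, -0.05])
a_opt, Lambda_opt = best_m_term_orthonormal(x, Phi, m=2)
print(sorted(Lambda_opt.tolist()), np.linalg.norm(x - a_opt))  # atoms 1 and 4; residual norm 0.25
```

For a redundant dictionary, this simple selection rule is no longer optimal, which is precisely the difficulty taken up next.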
In fact, if D is an arbitrary dictionary, finding the best m-term representation of an arbitrary signal is NP-hard [2]. There are algorithms with provable approximation guarantees for specific dictionaries, e.g., Villemoes' algorithm for Haar wavelet packets [3]. There are also some well-known heuristics, such as Matching Pursuit (MP) [4], Orthogonal Matching Pursuit (OMP) [5], and m-fold Matching Pursuit [6]. Several other methods rely on the Basis Pursuit paradigm, which advocates minimizing the ℓ_1 norm of the coefficients in the representation instead of minimizing the sparsity directly [7].

Some theoretical progress has already been made for dictionaries with low coherence. The coherence parameter μ equals the maximal inner product between two distinct atoms. For example, the union of spikes and sines is a dictionary with μ = d^{-1/2}. The authors in [6] have presented an efficient two-stage algorithm for the approximate representation of any signal over a sufficiently incoherent dictionary. This is the first known algorithm which provably approximates the solution to the sparse problem for any class of general dictionaries. In addition, this algorithm is highly efficient. For a suitably incoherent dictionary, it is also known that Basis Pursuit can resolve the subclass of signals which have an exact sparse representation [8].

This article offers a number of improvements to [6]. Specifically, we present a modified version of the algorithm in [6], which calculates significantly more accurate sparse representations for incoherent dictionaries and also applies to a much larger class of redundant dictionaries. Unlike an incoherent dictionary, where all the inner products are small, the dictionaries we consider only need to have small inner products "on average." In addition, our analysis is simpler. Of course, the new algorithm can be implemented just as efficiently as the ones in [6].

2. ALGORITHM AND ANALYSIS

2.1. Overall results

For an incoherent dictionary, we have

Theorem 1. Fix a dictionary D with coherence μ. We seek an m-term representation of an arbitrary signal x, where m < (1/3) μ^{-1}. There is an algorithm that produces an m-term representation a_m for x whose error is within an explicit factor of the optimal error ||x − a_opt||_2.

In comparison, the algorithm of [6] requires that m < (1/32) μ^{-1} and produces approximations with a weaker error bound; when m ≤ μ^{-1/2}, the resulting constant of approximation is about 46. Meanwhile, the algorithm described here produces approximations with error ||x − a_m||_2 ≤ 3 ||x − a_opt||_2 so long as 4 ≤ m ≤ μ^{-1/2}.

Theorem 1 is a special case of a result for general dictionaries. To state the full theorem, we need to borrow a definition from [9]. Let Φ be a matrix whose columns are the atoms of D. Then the Gram matrix G = Φ*Φ contains all the inner products between atoms. Let |G| denote its entrywise absolute value.
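To illustrate the quantities used in the analysis, here is a small sketch (our illustration, not from the paper): it forms the Gram matrix G = Φ*Φ for a dictionary stored as the columns of a NumPy array, reads off the coherence μ as the largest off-diagonal entry of |G|, and then runs a textbook Orthogonal Matching Pursuit loop in the spirit of [5]. The greedy loop is standard OMP, not the modified algorithm of Theorem 1, and all names here are our own.

```python
import numpy as np

def coherence(Phi):
    """Coherence mu: the largest inner product between two distinct (unit-norm) atoms,
    i.e. the largest off-diagonal entry of |G|, where G = Phi^T Phi is the Gram matrix."""
    G = np.abs(Phi.T @ Phi)
    np.fill_diagonal(G, 0.0)
    return G.max()

def omp(x, Phi, m):
    """Textbook Orthogonal Matching Pursuit: greedily select the atom most correlated
    with the current residual, then re-fit all selected coefficients by least squares."""
    residual = x.copy()
    chosen = []
    for _ in range(m):
        corr = np.abs(Phi.T @ residual)
        corr[chosen] = 0.0                      # never reselect an atom
        chosen.append(int(np.argmax(corr)))
        sub = Phi[:, chosen]
        coeffs, *_ = np.linalg.lstsq(sub, x, rcond=None)
        residual = x - sub @ coeffs
    return chosen, coeffs, residual

# Toy "spikes and sines" dictionary in R^16: 2d unit-norm atoms, coherence on the order of d**-0.5
d = 16
t = np.arange(d)
sines = np.stack([np.sin(np.pi * (k + 1) * (t + 0.5) / d) for k in range(d)], axis=1)
sines /= np.linalg.norm(sines, axis=0)
Phi = np.hstack([np.eye(d), sines])
print("mu =", round(float(coherence(Phi)), 3))

x = 2.0 * Phi[:, 3] - 1.5 * Phi[:, d + 5]       # a signal with an exact two-term representation
chosen, coeffs, residual = omp(x, Phi, m=2)
print("chosen atoms:", sorted(chosen), "residual norm:", round(float(np.linalg.norm(residual)), 6))
```

The least-squares re-fit at every step is what distinguishes OMP from plain Matching Pursuit [4], which only subtracts the contribution of the newly selected atom.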

