Explicit Capacity-Achieving List-Decodable Codes
or
Decoding Folded Reed-Solomon Codes up to their Distance

Venkatesan Guruswami∗        Atri Rudra†

Department of Computer Science and Engineering
University of Washington
Seattle, WA 98195

Abstract

For every 0 < R < 1 and ε > 0, we present an explicit construction of error-correcting codes of rate R that can be list decoded in polynomial time up to a fraction (1 − R − ε) of errors. These codes achieve the “capacity” for decoding from adversarial errors, i.e., they achieve the optimal trade-off between rate and error-correction radius. At least theoretically, this meets one of the central challenges in coding theory.

Prior to this work, explicit codes achieving capacity were not known for any rate R. In fact, our codes are the first to beat, for all rates R, the error-correction radius of 1 − √R that was achieved for Reed-Solomon codes in [11]. (For rates R < 1/16, a recent breakthrough by Parvaresh and Vardy [14] improved upon the 1 − √R bound; for R → 0, their algorithm can decode a fraction 1 − O(R log(1/R)) of errors.)

Our codes are simple to describe — they are certain folded Reed-Solomon codes, which are in fact exactly Reed-Solomon (RS) codes, but viewed as codes over a larger alphabet by careful bundling of codeword symbols. Given the ubiquity of RS codes, this is an appealing feature of our result, since the codes we propose are not too far from the ones in actual use.

The main insight in our work is that some carefully chosen folded RS codes are “compressed” versions of a related family of Parvaresh-Vardy codes. Further, the decoding of the folded RS codes can be reduced to list decoding the related Parvaresh-Vardy codes. The alphabet size of these folded RS codes is polynomial in the block length. This can be reduced to a (large) constant using ideas concerning “list recovering” and expander-based codes from [9, 10]. Concatenating the folded RS codes with suitable inner codes also gives us polynomial-time constructible binary codes that can be efficiently list decoded up to the Zyablov bound.

∗ Research supported in part by NSF grant CCF-0343672 and a Sloan Research Fellowship.
† Research supported by NSF grant CCF-0343672.

Contents

  Introduction
    Background and Context
    Our Result
    Other consequences
    Organization of the paper
  Overview of Proof Technique
  Preliminaries and Notation
  A Result on Polynomials over Finite Fields
  Folded Reed-Solomon codes and their decoding
    The construction
    The Parvaresh-Vardy code
    Connection between FRS and CRS codes
    List decoding FRS codes
    Achieving the list decoding capacity
  Extensions and Codes over Smaller Alphabets
    Extension to list recovering
    Binary codes decodable up to Zyablov bound
    Capacity-Achieving codes over smaller alphabets
  Concluding Remarks

1 Introduction

1.1 Background and Context

Error-correcting codes enable reliable communication of messages over a noisy channel by cleverly introducing redundancy into the message to encode it into a codeword, which is then transmitted on the channel. This is accompanied by a decoding procedure that recovers the correct message even when several symbols in the transmitted codeword are corrupted. In this work, we focus on the adversarial or worst-case model of errors — we do not assume anything about how the errors and error locations are distributed beyond an upper bound on the total number of errors that may be caused. The central trade-off in this theory is the one between the amount of redundancy needed and the fraction of errors that can be corrected.
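To make the worst-case model concrete, the following is a minimal sketch, not taken from the paper: a channel that corrupts at most e positions of a codeword, with the positions and the replacement symbols chosen arbitrarily. The alphabet, codeword, and helper names are illustrative.

```python
# A minimal sketch of the worst-case error model: the channel corrupts
# at most e positions, and we assume nothing about which positions or
# which wrong symbols the adversary picks.
import random

def hamming_distance(x, y):
    """Number of positions in which two equal-length words differ."""
    return sum(a != b for a, b in zip(x, y))

def adversarial_corrupt(codeword, e, alphabet):
    """Corrupt at most e positions of codeword with arbitrary wrong symbols.

    random.sample merely stands in for the adversary's arbitrary choice
    of error locations; a real adversary may choose them maliciously.
    """
    word = list(codeword)
    for i in random.sample(range(len(word)), e):
        word[i] = random.choice([s for s in alphabet if s != word[i]])
    return word

alphabet = list(range(7))           # a small illustrative alphabet
codeword = [1, 5, 2, 0, 6, 3, 4]    # a hypothetical transmitted codeword
received = adversarial_corrupt(codeword, e=2, alphabet=alphabet)
assert hamming_distance(codeword, received) <= 2   # the only guarantee
```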
The redundancy is measured by the rate of the code, which is the ratio of the number of information symbols in the message to that in the codeword — thus, for a code with encoding function E : Σ^k → Σ^n, the rate equals k/n. The block length of the code equals n, and Σ is its alphabet.

The goal in decoding is to find, given a noisy received word, the actual codeword that it could have possibly resulted from. If we target correcting a fraction ρ of errors (ρ will be called the error-correction radius), then this amounts to finding codewords within (normalized Hamming) distance ρ from the received word. We are guaranteed that there will be a unique such codeword provided the distance between every two distinct codewords is at least 2ρ, or in other words the relative distance of the code is at least 2ρ. However, since the relative distance δ of a code must satisfy δ ≤ 1 − R, where R is the rate of the code (by the Singleton bound), the best trade-off between ρ and R that unique decoding permits is ρ = ρ_U(R) = (1 − R)/2. But this is an overly pessimistic estimate of the error-correction radius, since, owing to the way Hamming spheres pack in space, for most choices of the received word there will be at most one codeword within distance ρ from it even for ρ much greater than δ/2. Therefore, always insisting on a unique answer will preclude decoding most such received words owing to a few pathological received words that have more than one codeword within distance roughly δ/2 from them.

A notion called list decoding, which dates back to the late 1950s [3, 19], provides a clean way to get around this predicament, and yet deal with worst-case error patterns. Under list decoding, the decoder is required to output a list of all codewords within distance ρ of the received word. Let us call a code C (ρ, L)-list decodable if the number of codewords within distance ρ of any received word is at most L. To obtain better trade-offs via list decoding, we need (ρ, L)-list decodable codes where L is bounded by a polynomial function of the block length, since this is an a priori requirement for polynomial time list decoding. How large can ρ be as a function of R for which such (ρ, L)-list decodable codes exist? A standard random coding argument shows that we can have ρ ≥ 1 − R − o(1) over large enough alphabets, cf. [20, 4], and a simple counting argument shows that ρ must be at most 1 − R. Therefore the list decoding capacity, i.e., the information-theoretic limit of list decodability, is given by the trade-off ρ_cap(R) = 1 − R = 2ρ_U(R). Thus list decoding holds the promise of correcting twice as many errors as unique decoding, for every rate.

The above-mentioned list decodable codes are non-constructive. In order to realize the potential of list decoding, one needs explicit constructions of such codes, and on top of that, polynomial time algorithms to perform list decoding. After essentially no progress in this direction in over 30 years, the work
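As a concrete aside on the construction announced in the abstract, here is a toy sketch of the “bundling” of codeword symbols. The folding parameter m and the plain left-to-right grouping are assumptions made for illustration; the paper's actual folded RS construction fixes a specific ordering of the evaluation points of the underlying Reed-Solomon code.

```python
# A toy sketch of the bundling described in the abstract: an RS codeword
# of length n over an alphabet Sigma is read in blocks of m consecutive
# symbols, giving a word of length n/m over the larger alphabet Sigma^m.
# The folding parameter m and this grouping are illustrative assumptions.

def fold(codeword, m):
    """Bundle a length-n word into n/m symbols over the alphabet Sigma^m."""
    n = len(codeword)
    if n % m != 0:
        raise ValueError("block length must be divisible by folding parameter m")
    return [tuple(codeword[i:i + m]) for i in range(0, n, m)]

# Folding changes only the alphabet, not the encoded information:
# a length-8 word over Sigma becomes a length-4 word over Sigma^2.
word = [3, 1, 4, 1, 5, 9, 2, 6]
print(fold(word, m=2))   # [(3, 1), (4, 1), (5, 9), (2, 6)]
```

In the folded view, each block counts as a single symbol of the larger alphabet, so error fractions are measured over the length-n/m word; the abstract's claim is that suitably chosen folded RS codes admit polynomial-time list decoding up to a fraction 1 − R − ε of such errors.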

