MIT 6.897 - Algorithmic Introduction to Coding Theory


6.897 Algorithmic Introduction to Coding Theory, September 30, 2002
Lecture 7
Lecturer: Madhu Sudan
Scribe: Jonathan Kelner, some sections by Adam Smith

Today we’ll describe a new family of codes, called algebraic-geometry codes. These codes generalize Reed-Solomon codes and yield surprisingly good codes over constant-sized alphabets. We’ll motivate the codes from two points of view. First we’ll show how they challenge the “conjectured converse of the Gilbert-Varshamov bound”. We’ll then motivate the codes from the algebraic perspective. The reader is warned that this lecture is not self-contained: it describes the “nature” of the codes without showing how to construct them, or why they even exist.

1 Motivation 1: Getting Better Parameters

q-ary Codes. Today we shall concentrate on $q$-ary codes. Our alphabet will be $\mathbb{F}_q$, the finite field of size $q$. Our codes will be subsets $C \subseteq \mathbb{F}_q^n$.

The Singleton Bound. The Singleton bound that we proved for binary alphabets can be established in this setting as well. Suppose $C \subseteq \mathbb{F}_q^n$ and that $|C| > q^{k-1}$. By the pigeonhole principle, the projection of $C$ onto the first $k - 1$ coordinates will have to send some two codewords to the same value. These two codewords agree on their first $k - 1$ coordinates, so they can differ only on the remaining $n - (k - 1)$ positions. This shows that $\mathrm{dist}(C) \leq n - (k - 1)$. We thus have, for any alphabet, a tradeoff between rate and relative distance: for any code,

$$R \leq 1 - \delta.$$

We would like to know how close to this bound we can get.

The q-ary Gilbert-Varshamov Construction. One way to approach this bound is to perform a $q$-ary analogue of the Gilbert-Varshamov construction by greedily building a code. Start with the origin in $\mathbb{F}_q^n$. Choose a second point at distance at least $d$ from the origin, then a third point at distance at least $d$ from the first two points, and so on. This yields codes with

$$q^k = |C| \approx \frac{q^n}{\mathrm{Vol}_q(d, n)}. \qquad (1)$$
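As a concrete illustration, the greedy construction can be run directly at toy parameters. The following sketch is ours, not from the lecture, and the parameter choices ($q = 3$, $n = 5$, $d = 3$) are picked only to keep the search small:

```python
# Greedy Gilbert-Varshamov construction over F_q: scan F_q^n in some fixed
# order and keep every point at Hamming distance >= d from all kept points.
# A toy sketch of the argument above; the parameter choices are ours.
from itertools import product

def hamming(u, v):
    """Hamming distance between two tuples over F_q."""
    return sum(a != b for a, b in zip(u, v))

def greedy_gv(q, n, d):
    code = []
    for point in product(range(q), repeat=n):  # first kept point is the origin
        if all(hamming(point, c) >= d for c in code):
            code.append(point)
    return code

C = greedy_gv(q=3, n=5, d=3)
# By construction, every pair of codewords is at Hamming distance >= 3.
```

The sketch only certifies the distance property directly; the size guarantee $|C| \geq q^n/\mathrm{Vol}_q(d, n)$ comes from the volume argument in the text.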
By noting that, for large $n$, most of the volume of $B_q(0, r)$ comes from points at distance exactly $r$ from the origin (as opposed to distance less than $r$), we obtain

$$\mathrm{Vol}_q(d, n) \approx \binom{n}{d} (q - 1)^d \, 1^{n - d} \approx q^{n H_q(d/n)}, \qquad (2)$$

where $H_q(p)$ is the $q$-ary entropy:

$$H_q(p) = p \log_q \frac{q - 1}{p} + (1 - p) \log_q \frac{1}{1 - p}.$$

Combining equations (1) and (2), we obtain

$$q^k \approx q^{n(1 - H_q(\delta))},$$

so that there exist codes with

$$R \geq 1 - H_q(\delta).$$

The q-ary Gilbert-Varshamov Bound. If we fix $\delta$ and let $q$ tend to $\infty$, we get

$$H_q(\delta) = \delta \log_q(q - 1) + \frac{H_2(\delta)}{\log q} = \delta + \frac{H_2(\delta)}{\log q} + O\!\left(\frac{1}{q \log q}\right) = \delta + O\!\left(\frac{1}{\log q}\right),$$

using the fact that

$$\frac{\log(q - 1)}{\log q} = 1 - \frac{\log q - \log(q - 1)}{\log q} \approx 1 - \frac{1}{q \log q}.$$

This means that for fixed $\delta$, random codes will more or less achieve

$$R = 1 - \delta - O(1/\log q).$$

(Note: all logarithms are base 2 unless noted otherwise.)

Note that the Singleton bound shows that no code can achieve $R > 1 - \delta$. The above bound shows that random (linear) codes approach the Singleton bound with an inverse-logarithmic deficit in the alphabet size. This means that we need an alphabet that is exponentially large in $1/(1 - R - \delta)$. It is thus natural to ask whether we can do better, i.e., whether we can find codes that approach the Singleton bound but use smaller alphabets. We begin by noting that we have already found one such family of codes.

Reed-Solomon Codes. Recall that Reed-Solomon codes met the Singleton bound exactly, and did so with an alphabet size of exactly $n$ (for infinitely many choices of $n$). So Reed-Solomon codes seem to perform much better, although in this case one cannot really talk about $q$ and $n$ separately. With RS codes we must have $q \geq n$, and so $q$ must go to $\infty$ with $n$. Nonetheless, we know that $R = 1 - \delta - O(1/n)$ for RS codes (for any $q \geq n$), and so we can wave our hands and claim that we get

$$R = 1 - \delta - O(1/q).$$

So in effect the difference between $1 - \delta$ and $R$ shrinks inversely in $q$, rather than inversely in the logarithm of $q$.
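The expansion of $H_q(\delta)$ above is easy to check numerically. The following small script is our own sanity check of the derivation (the function name is ours):

```python
# Numerically compare H_q(delta) with the expansion delta + H_2(delta)/log q.
# Our own sanity check of the derivation above; names are ours.
from math import log

def Hq(p, q):
    """q-ary entropy: p*log_q((q-1)/p) + (1-p)*log_q(1/(1-p))."""
    return p * log((q - 1) / p, q) + (1 - p) * log(1 / (1 - p), q)

delta, q = 0.3, 1024
exact = Hq(delta, q)
approx = delta + Hq(delta, 2) / log(q, 2)   # the delta + H_2(delta)/log q terms
# exact - approx is the O(1/(q log q)) error, already tiny at q = 1024.
```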
This motivates the question: can we somehow turn the Reed-Solomon intuition into a formal proof, where we actually get to fix $q$, let $n$ go to infinity, and see the behavior of $R$ vs. $\delta$? Algebraic-geometry (AG) codes turn out to do exactly this.

AG Codes. The constructions of AG codes in fact yield

$$R = 1 - \delta - \frac{1}{\sqrt{q} - 1} = 1 - \delta - O(1/\sqrt{q}),$$

for every even prime power $q$ (i.e., $q$ must equal $p^\ell$ for prime $p$ and even integer $\ell$). These codes do not require that $q$ scale with $n$; i.e., in our “analysis” we may fix $\delta$ and $q$ and let $n \to \infty$, and then let $q$ increase and see how the parameters scale with $q$. While the codes do not achieve a deficit that is inverse in $q$, they do get a polynomial decay of the deficit as a function of $q$. So it becomes clear that as $q$ grows this family of codes will outperform the Gilbert-Varshamov bound. Since the deficit functions are quite explicit, it is possible to compare them exactly and note that the function $\delta + 1/(\sqrt{q} - 1)$ is smaller than $H_q(\delta)$ for $\delta = \frac{1}{2}$ and $q$ at least roughly $44$. The smallest even prime power larger than this is $49$, and so we get that for $q \geq 49$ the algebraic-geometry codes outperform random codes!

2 Motivation 2: Generalizing Previous Constructions

Recall that in previous classes we got codewords by taking multivariate polynomials and evaluating them at all points in $\mathbb{F}_q^m$ (RS codes were the univariate case $m = 1$). Consider the univariate and bivariate cases with degree $\ell$:

Univariate case: yields a $[q^2, \ell^2, q^2 - \ell^2]_{q^2}$-code.
Bivariate case: yields a $[q^2, \ell^2, (q - \ell)^2]_{q}$-code.

Thus, reducing the alphabet size from $q^2$ to $q$ cost us a reduction in the distance of $2\ell(q - \ell)$. Where does this difference come from? Intuitively, it is because the bivariate plane $\mathbb{F}_q \times \mathbb{F}_q$ contains many small subspaces that encode quite inefficiently. For example, take any axis-parallel line in the plane. Knowing that a codeword is $0$ on $\ell + 1$ of the points of the line means it must be $0$ at all $q$ points on the line, yet the codeword may still be non-zero elsewhere.
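Since both deficit functions are explicit, the crossover claimed above can be located numerically. This comparison script is our own check, not part of the lecture:

```python
# Compare the AG deficit delta + 1/(sqrt(q) - 1) against the GV bound
# H_q(delta) at delta = 1/2, and find where the AG deficit drops below it.
# Our own numerical check of the crossover claim in the text.
from math import log, sqrt

def Hq(p, q):
    return p * log((q - 1) / p, q) + (1 - p) * log(1 / (1 - p), q)

delta = 0.5
crossover = next(q for q in range(5, 100)
                 if delta + 1 / (sqrt(q) - 1) < Hq(delta, q))
# The crossover lands in the low-to-mid 40s; the smallest even prime power
# above it is 49, matching the q >= 49 claim in the text.
```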
Thus these $q$ zeroes of the codeword only impose $\ell + 1$ linear constraints on it, a deficit of roughly $q - \ell$.

Another example of such a subspace is the circle $x^2 + y^2 = c$ for some constant $c$. One can prove that if the polynomial is $0$ on $2\ell$ of the circle’s points, then it must be $0$ everywhere on the circle, which could contain up to $2q$ points, depending on $c$ and on the field size.

The big idea. The main idea of algebraic-geometry codes is to not evaluate your polynomials at every point in $\mathbb{F}_q \times \mathbb{F}_q$, but only at some carefully chosen subset. We thus ask: what is a good subset? It will turn out that a good answer is to choose some nice polynomial $R(x, y)$ and evaluate our polynomials at the points of

$$S = \{(\alpha, \beta) \mid R(\alpha, \beta) = 0\}.$$
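To make the idea concrete, here is a toy sketch of our own: take the circle $R(x, y) = x^2 + y^2 - 1$ over $\mathbb{F}_7$, collect its zero set $S$, and evaluate small bivariate polynomials on $S$. The curve, field, and function names are illustrative choices of ours; real AG codes use much better curves.

```python
# Evaluate bivariate polynomials only on S = {(a, b) : R(a, b) = 0}, with the
# toy choice R(x, y) = x^2 + y^2 - 1 over F_7. Our own illustration of the
# "big idea" above, not an actual AG-code construction.
q = 7
S = [(a, b) for a in range(q) for b in range(q)
     if (a * a + b * b - 1) % q == 0]

def encode(coeffs):
    """Codeword of f(x, y) = sum_{i,j} coeffs[i][j] * x^i * y^j: its values on S."""
    return tuple(sum(c * pow(a, i, q) * pow(b, j, q)
                     for i, row in enumerate(coeffs)
                     for j, c in enumerate(row)) % q
                 for (a, b) in S)

word = encode([[1, 2], [3, 0]])   # f(x, y) = 1 + 2y + 3x
# The block length is |S|, the number of F_7-points on the circle (here 8).
```

The codeword length is $|S|$ rather than $q^2$, which is exactly the freedom the AG construction exploits: a well-chosen $R$ gives a point set with no inefficient small subspaces.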

