MIT 18 310C - Supplementary Notes on BCH Codes

We suppose we want to be able to send messages that can be conveniently decoded even in the presence of errors. First we consider the situation in which we want to be able to correct an error in a single bit. We want to correct errors by a uniform procedure, regardless of what the original message was before the error appeared. To do so, we need to do something to the received message (the one containing the error) that eliminates the original message. We can then concentrate on the error, fix it, and then, once we have eliminated the error, decode the corrected message by inverting the encoding operation.

1. With a matrix code.

In a matrix code, encoding obeys the formula c(m) = mE, where E is the encoding matrix. We can eliminate the message by finding an appropriate matrix D obeying ED = 0 (the zero matrix). Then if the received message r contains one error, it has the form r = mE + e_j, where e_j is the standard basis vector having a '1' in the j-th position, for some as yet unknown j. If we form rD we get mED + e_jD which, since mED = 0, is the j-th row of D. Once we know e_jD, we can compare it against each row of D, locate the position j that is in error, and change the corresponding bit of r.

2. With a polynomial code using a primitive polynomial.

With such a code, defined by a polynomial p(x), the encoding rule is c_m(x) = m(x)p(x), so every code word leaves remainder 0 upon division by p(x). We can decode by computing a "power-remainder table" that gives the remainder of each power of x upon division by p(x), up to some appropriate power. If there is only one error, the received word has the form r(x) = m(x)p(x) + x^e, where the single monomial x^e is the error, and the remainder of r(x) will be the remainder of x^e. We can compare it to the remainders of powers in the remainder table.
The remainder it agrees with will correspond to the power of x that is in error.

This procedure seems quite different from the matrix procedure: instead of multiplying the received message by a decoding matrix, we divide its polynomial by p(x) and look at the remainder. But at the point where we have not yet corrected errors, we are not interested in the quotient of the division, only in the remainder. And we can get the remainder of r(x) by adding up the remainders of its 1-bits (since the remainder of a sum is the sum of the remainders). Furthermore, this sum of remainders of 1-bits is the vector whose entries are the dot products of r(x), considered as a vector, with the columns of the remainder table. And this is exactly the same as taking the matrix product of the received word with a D matrix that consists of the columns of the power-remainder table. In other words, the two procedures are really the same thing, and the D matrix for a polynomial single-error-correcting code is its power-remainder table.

3. Correcting more errors: the plan of BCH codes.

When we want to correct more errors, taking the remainder of r(x) upon division by p(x) will give us the remainder of the sum of the monomials of the various errors. To locate them we need to extract more information about the errors than merely this remainder of their sum. Our plan will be to use an encoding polynomial that is the product of p(x) with some other polynomial, the latter chosen to give us additional information that will allow us to deduce exactly what the errors are. But what polynomial should we multiply p(x) by? To understand the answer we first ask: how do we hope to decode? What exactly is the information we need to be able to do so? Well, we hope to mimic the process used in the single-error case. In that case we found the remainder of r upon division by p, and with it we can check each power x^j to see whether it has the same remainder.
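As a concrete sketch (not from the notes themselves), here is a minimal Python illustration, assuming the primitive polynomial p(x) = x^3 + x + 1 and representing a GF(2) polynomial as an integer whose bit i is the coefficient of x^i. It decodes a single error from the power-remainder table, and then checks the equivalence just described: XORing the table rows selected by the 1-bits of r (the "matrix product" rD) gives the same remainder as dividing r(x) by p(x):

```python
def poly_mod(a, p):
    # remainder of a(x) modulo p(x) over GF(2); bit i holds the x^i coefficient
    dp = p.bit_length() - 1
    while a.bit_length() - 1 >= dp:
        a ^= p << (a.bit_length() - 1 - dp)
    return a

def poly_mul(a, b):
    # product of a(x) and b(x) over GF(2)
    out = 0
    while b:
        if b & 1:
            out ^= a
        a <<= 1
        b >>= 1
    return out

p = 0b1011                      # assumed p(x) = x^3 + x + 1, primitive over GF(2)
D = [poly_mod(1 << j, p) for j in range(7)]   # row j of D: remainder of x^j
table = {rem: j for j, rem in enumerate(D)}   # power-remainder lookup

m = 0b1101                      # message m(x) = x^3 + x^2 + 1 (made up for the demo)
c = poly_mul(m, p)              # codeword c(x) = m(x) p(x); divisible by p(x)
r = c ^ (1 << 4)                # a single error: flip the x^4 bit

e = table[poly_mod(r, p)]       # remainder of r equals remainder of x^e; look it up
assert e == 4
assert poly_mod(r ^ (1 << e), p) == 0         # corrected word is again a codeword

# "matrix" decoding: XOR the rows of D picked out by the 1-bits of r
syndrome = 0
for j in range(7):
    if (r >> j) & 1:
        syndrome ^= D[j]
assert syndrome == poly_mod(r, p)             # same answer as polynomial division
```

Since p(x) is primitive of degree 3, the powers x^0, ..., x^6 all leave distinct nonzero remainders, which is what makes the table lookup unambiguous.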
(Just like the prince did with Cinderella's slipper.) In general, we want to check each power x^j against something, and have that check tell us whether x^j is any one of the error monomials. We can phrase our single-error procedure as solving the equation rem(y) = rem(r) among monomials y by checking each one of them, which is the same as checking the equation rem(y) - rem(r) = 0. With two errors we want to check a similar equation, but one which gives the locations of both of the actual error powers. Suppose the error is x^e1 + x^e2. We want to find a polynomial whose remainder vanishes at the monomials x^e1 and x^e2 and at no other monomials. What we want, then, is a magical polynomial, called the "error locator polynomial", which has the wonderful property that it gives remainder 0 for every monomial that is in error, and gives remainder 0 for no others. If we have it, we can check each monomial in it and change those for which it gives remainder 0. And what is this polynomial? It is obviously (y - x^e1)(y - x^e2). (And you can write a similar error locator polynomial for any number of errors.) (It is obvious that this polynomial gives remainder 0 for each error monomial, because when we multiply the remainders of the factors, one remainder factor will be the all-0 remainder. We will have to, and will, prove that there are no other monomial solutions to this equation except these two.) So our task is to find this error locator polynomial. And how can we find it? To determine a polynomial, it is necessary and sufficient to determine its coefficients. Here we may multiply the polynomial out, using the distributive law, and find it becomes y^2 + (x^e1 + x^e2)y + x^(e1+e2). In the two-error case we are considering, you can see that the coefficient of the first power of y has a remainder that is the remainder of r itself.
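To see the expansion in action, here is a hedged numerical sketch, again assuming p(x) = x^3 + x + 1 (a choice made here for illustration, not in the notes) and hypothetical error positions e1 = 2 and e2 = 5. Working with remainders modulo p(x), it builds the two coefficients of y^2 + (x^e1 + x^e2)y + x^(e1+e2) and confirms that exactly the monomials x^2 and x^5 make the locator vanish:

```python
def poly_mod(a, p):
    # remainder of a(x) modulo p(x) over GF(2); bit i holds the x^i coefficient
    dp = p.bit_length() - 1
    while a.bit_length() - 1 >= dp:
        a ^= p << (a.bit_length() - 1 - dp)
    return a

def poly_mul(a, b):
    # product of a(x) and b(x) over GF(2)
    out = 0
    while b:
        if b & 1:
            out ^= a
        a <<= 1
        b >>= 1
    return out

p = 0b1011                                   # assumed primitive p(x) = x^3 + x + 1
def gmul(a, b):
    return poly_mod(poly_mul(a, b), p)       # multiply remainders, reduce mod p(x)

e1, e2 = 2, 5                                # hypothetical error positions
X = [poly_mod(1 << j, p) for j in range(7)]  # remainder of each power x^j
s1 = X[e1] ^ X[e2]                           # remainder of x^e1 + x^e2
s2 = gmul(X[e1], X[e2])                      # remainder of x^(e1+e2)

# evaluate locator(y) = y^2 + s1*y + s2 at every monomial y = x^j
roots = [j for j in range(7)
         if gmul(X[j], X[j]) ^ gmul(s1, X[j]) ^ s2 == 0]
assert roots == [e1, e2]                     # vanishes exactly at the error powers
```

Note that over GF(2) subtraction and addition coincide, which is why the minus signs in (y - x^e1)(y - x^e2) become XORs here.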
So all we need to do to be able to decode is to find a factor to throw into the encoding polynomial that will allow us to determine the other coefficient, whose remainder upon division by p will be that of the monomial that is the product of the two error monomials. The various terms that occur in the error locator polynomial are given names; they are called the "elementary symmetric functions" of the error monomials. The k-th elementary symmetric function consists of the sum of the products of k distinct error monomials, taken in all possible ways. Thus above, there are two terms for k = 1 and one term for k = 2, because there are, we suppose, only two errors. There is no k = 3 term because, with only two errors, there is no product of three of them. The k = 0 term is, by definition, always 1. We will denote the k-th elementary symmetric function as s(k)(x) and its remainder as
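The combinatorial definition of the elementary symmetric functions can be illustrated with ordinary integers (in the decoder proper the products would be of monomial remainders, as above; the values here are made up purely for illustration, and three errors are used to show the general pattern):

```python
from itertools import combinations
from math import prod

def esym(vals, k):
    # k-th elementary symmetric function: sum of products of k distinct values.
    # combinations(vals, 0) yields one empty tuple and prod(()) == 1,
    # so esym(vals, 0) == 1, matching the convention that the k = 0 term is 1.
    return sum(prod(c) for c in combinations(vals, k))

errs = [2, 5, 6]                             # three hypothetical error-monomial values
assert esym(errs, 0) == 1                    # k = 0 term is always 1
assert esym(errs, 1) == 2 + 5 + 6            # sum of the monomials
assert esym(errs, 2) == 2*5 + 2*6 + 5*6      # all products of two distinct monomials
assert esym(errs, 3) == 2*5*6                # the single product of all three
```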

