Berkeley ELENG 121 - Lecture 6: Rate Efficient Reliable Communication

ECE461: Digital Communications
Lecture 6: Rate Efficient Reliable Communication

Introduction

We now move to rate efficient reliable communication (energy efficiency tends to come for free in this scenario). In this lecture we see that there are block communication schemes smarter than the naive repetition coding seen earlier that promise arbitrarily reliable communication while still having a non-zero data rate. We begin by setting the stage for studying rate efficient reliable communication by carefully dividing the transmitter strategy of mapping the information bits to transmit voltages into two distinct parts:

1. Map information bits into coded bits by adding redundancy: the number of coded bits is larger than the number of information bits, and the ratio of information bits to coded bits is called the coding rate. This process is generally called coding.

2. Map coded bits directly into transmit voltages. This is done sequentially: for instance, if only two transmit voltages are allowed (±√E), then every coded bit is sequentially mapped into one transmit voltage. If four transmit voltages are allowed (±√E, ±√E/3), then every two consecutive coded bits are mapped into a single transmit voltage. This mapping is typically called modulation, and it can be viewed as a labeling of the discrete transmit voltages with a binary sequence.

The receiver could also be broken down into two similar steps, but in this lecture we will continue to focus on the ML receiver, which maps the received voltages directly into information bits. Focusing on a simple binary modulation scheme and the ML receiver, we will see in this lecture that there are plenty of good coding schemes: in fact, most coding schemes promise arbitrarily reliable communication provided they are decoded using the corresponding ML receiver!

Transmitter Design: Coding and Modulation

We are working with an energy constraint of E, so the transmit voltage is restricted to be within ±√E at each time instant.
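The modulation step above is simple enough to sketch directly. A minimal Python illustration follows; the bit-to-voltage labeling (1 → +√E, 0 → −√E) is an assumed convention for the example, not one fixed by the lecture:

```python
import math

def modulate(coded_bits, E):
    """Map each coded bit to one of the two allowed transmit voltages.
    Assumed labeling: bit 1 -> +sqrt(E), bit 0 -> -sqrt(E)."""
    return [math.sqrt(E) if b == 1 else -math.sqrt(E) for b in coded_bits]

print(modulate([1, 0, 1, 1], E=4.0))  # [2.0, -2.0, 2.0, 2.0]
```

One coded bit maps to one voltage per time instant, so a length-T coded sequence occupies exactly T time instants.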
For simplicity, let us restrict to only two possible transmit voltages: +√E and −√E.¹

If we are using T time instants to communicate, this means that the number of coded bits is T, one per time instant. With a coding rate of R, the number of information bits (the size of the data packet) is B = RT. Surely R ≤ 1 in this case. The scenario of R = 1 exactly corresponds to the sequential communication scheme studied in Lecture 4. As we saw there, the reliability level approaches zero for large packet sizes. The point is that even though we have spaced the transmit voltages far enough apart (the spacing is 2√E in this case), the chance that at least one of the bits is decoded incorrectly approaches unity when there are a lot of bits. The idea of introducing redundancy between the number of information bits and coded bits (by choosing R < 1) is to ameliorate exactly this problem.

¹ We will explore the implications of this restriction in a couple of lectures from now.

Linear Coding

As we have seen, coding is an operation that maps a sequence of bits (the information bits) to a longer sequence of bits (the coded bits). While there are many types of such mappings, the simplest one is the linear code. This is best represented mathematically by a matrix C whose elements are drawn from {0, 1}:

    (vector of coded bits) = C · (vector of information bits).    (1)

Here the vector-space operations are all done over the binary field {0, 1}: i.e., multiplication and addition in the usual modulo-2 fashion. The dimension of the matrix C is T × RT, and it maps a vector of dimension RT × 1 (the sequence of information bits) into a vector of dimension T × 1 (the sequence of coded bits). The key problem is to pick the appropriate code C such that the unreliability with ML decoding at the receiver is arbitrarily small.
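The modulo-2 matrix multiplication of Equation (1) can be sketched in a few lines. The particular rate-1/2 matrix below is a made-up example for illustration, not a code given in the lecture:

```python
import numpy as np

def encode(C, info_bits):
    """Linear code over the binary field {0, 1}:
    coded bits = C @ info_bits (mod 2), as in Equation (1).
    C has shape (T, RT); info_bits has length RT."""
    return (C @ info_bits) % 2

# Hypothetical rate R = 1/2 code: 2 information bits -> T = 4 coded bits.
C = np.array([[1, 0],
              [0, 1],
              [1, 1],
              [1, 0]])
print(encode(C, np.array([1, 1])))  # [1 1 0 1]
```

Note that the output length T exceeds the input length RT whenever R < 1; this is exactly the redundancy the lecture is after.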
In this lecture we will see that almost all matrices C actually have this property!

A Novel Approach

To study this we will consider the set 𝒞 of all possible binary matrices C: there are 2^(RT²) of them (each entry of the matrix can be 0 or 1, and there are RT² entries in the matrix). We will show that the average unreliability, averaged over all the matrices C,

    P[E] := (1/2^(RT²)) Σ_{C ∈ 𝒞} P[E|C],    (2)

is arbitrarily small for large packet sizes B (and hence large time T). In Equation (2) we have used the notation P[E|C] to denote the unreliability of communication with the appropriate ML receiver over the AWGN channel when using the code C at the transmitter.² If P[E] is arbitrarily small, then most code matrices C must have an error probability that is also arbitrarily small. In fact, at most a polynomial (in RT) number of codes can have poor reliability.

² Keep in mind that the ML receiver will, of course, depend on the code C used at the transmitter.

Calculating Average Unreliability

This unreliability level is the average unreliability experienced, averaged over all possible information bit sequences:

    P[E|C] = (1/2^(RT)) Σ_{k=1}^{2^(RT)} P[E|B_k, C],    (3)

where B_k is the k-th information packet B (there are 2^B = 2^(RT) possible information packets). The error event E occurs when the likelihood of the T received voltages is larger for some other packet B_j with j ≠ k. The probability of this event depends on the nature of the code C and is, in general, quite complicated to write down precisely. As in the previous lecture, we will use the union bound to get an upper bound on this unreliability level:

    P[E|B_k, C] < Σ_{j=1, j≠k}^{2^(RT)} P₂[B_k, B_j|C],    (4)

where we have denoted by P₂[B_k, B_j|C] the probability of mistakenly concluding that B_j is the information packet when actually B_k was transmitted using the code C.

Substituting Equations (4) and (3) into Equation (2), we get

    P[E] < (1/2^(RT)) Σ_{k=1}^{2^(RT)} Σ_{j=1, j≠k}^{2^(RT)} [ (1/2^(RT²)) Σ_{C ∈ 𝒞} P₂[B_k, B_j|C] ].    (5)
The Key Calculation

We now come to the key step in this whole argument: evaluating the value of the expression

    (1/2^(RT²)) Σ_{C ∈ 𝒞} P₂[B_k, B_j|C].    (6)

From our derivation in Lecture 4, we know that the error probability is

    P₂[B_k, B_j|C] = Q(d/(2σ)),    (7)

where d is the Euclidean distance between the vectors of transmitted voltages corresponding to the information packets B_k and B_j using the code C. Since we have only a binary transmit voltage choice, the distance d simply depends on the number of time instants where the coded bits corresponding to the information packets B_k and B_j differ. Suppose the coded bits differ at ℓ of the possible T locations. Then the Euclidean distance squared is

    d² = 4ℓE.    (8)

The idea is that at each time instant where the coded bits differ, one transmit voltage is +√E and the other is −√E, a separation of 2√E, so each such instant contributes (2√E)² = 4E to the squared distance.
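Equations (3), (4), (7), and (8) combine into a union bound that is computable for any given small code C. The sketch below (the rate-1/2 matrix is a made-up example, and the bit-to-voltage labeling is an assumed convention) enumerates all information packets, converts the Hamming distance ℓ between codewords into d² = 4ℓE, and averages the pairwise Q-function terms:

```python
import itertools
import math

import numpy as np

def Q(x):
    # Gaussian tail probability: Q(x) = P(N(0,1) > x).
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound_unreliability(C, E, sigma):
    """Union-bound estimate of P[E|C], per Equations (3)-(4): average over
    all information packets B_k of the summed pairwise error probabilities
    P2[B_k, B_j|C] = Q(d/(2*sigma)), with d^2 = 4*ell*E (Equation (8)),
    where ell is the number of positions where the two codewords differ."""
    T, B = C.shape  # C is T x RT, so B = RT information bits
    packets = [np.array(p) for p in itertools.product([0, 1], repeat=B)]
    codewords = [(C @ p) % 2 for p in packets]
    bound = 0.0
    for k, ck in enumerate(codewords):
        for j, cj in enumerate(codewords):
            if j == k:
                continue
            ell = int(np.sum(ck != cj))   # differing time instants
            d = 2.0 * math.sqrt(ell * E)  # Equation (8)
            bound += Q(d / (2.0 * sigma))
    return bound / len(packets)

# Made-up rate R = 1/2 code: 2 information bits -> T = 4 coded bits.
C = np.array([[1, 0],
              [0, 1],
              [1, 1],
              [0, 1]])
print(union_bound_unreliability(C, E=1.0, sigma=1.0))
```

The exhaustive enumeration here is exponential in RT, so it is only feasible for toy codes; the point of the lecture's argument is precisely to avoid such computations by averaging over all codes analytically.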

