MIT 6.454 - Linear Network Codes

Linear Network Codes: A Unified Framework for Source, Channel, and Network Coding

Minkyu Kim
6.454 Fall 2003

1 Introduction

This report summarizes the issue of separation and code design for network data transmission environments, as discussed in the paper by Effros et al. [1].

It is often considered that the failure of source-channel separation in network environments is a crucial obstacle to applying information-theoretic tools in networks. However, within a consistent framework for digital networks, source-channel separation turns out to be more robust than existing counterexamples may suggest. More interestingly, separating source and channel code design does not necessarily simplify the design of communication systems for digital networks; rather, such a decomposition of the problem into modular tasks may increase complexity, since the decomposition imposes unnecessary conditions.

The network coding literature assumes independent data bits and lossless links, and hence endorses a philosophy in which source and channel coding are separated from network coding or routing. However, there are simple examples showing that neither separate source-network coding nor separate channel-network coding techniques guarantee optimal performance. We further argue that the major challenge in the design of network systems is the lack of separation of large networks into canonical elements, such as simple multiple access or broadcast networks, rather than the lack of source-channel separation in networks.

2 Preliminaries

The network model considered here requires the same finite alphabet at all nodes and additionally allows erasures, assumed to be channel-imposed, irreversible, and independent of the channel input. We deal with two important types of networks, multiple access and broadcast networks, and argue that random linear codes are asymptotically optimal for those setups. This approach may be viewed, in the simplest way, as a generalization of information-theoretic results known for single-receiver source codes and for single-transmitter, single-receiver channel codes. From the networking perspective, it also has the interpretation that compression, channel coding, and routing are not separable functions.

All results in this report are given in their simplest forms. In particular, in all cases the source and channel alphabets are binary (including the erasure symbol where needed), and all results are stated for iid random processes. Also, all code constructions combine, for simplicity, random linear encoding with typical set decoding. However, all of the results may generalize widely beyond the forms stated here.

3 Single-Transmitter, Single-Receiver Networks

Given a single-transmitter, single-receiver network, source coding can be viewed as an extension of network coding to applications with statistically dependent input symbols. A network code is said to accomplish optimal source coding on a noise-free network if that code can be used to transmit any source with entropy lower than the network capacity with asymptotically negligible error probability.

Let us begin by showing that the expected error probability of a randomly chosen linear source code with rate R tends to zero as n grows without bound for any source U with H(U) < R. The fixed-rate linear encoder is independent of the source distribution, and we use distribution-dependent typical set decoders for simplicity.

Let $a_n$ be an $\lceil nR \rceil \times n$ matrix with coefficients in the binary field $\mathbb{F}_2$. The encoder for the linear source code based on $a_n$ is
\[
\alpha_n(u^n) = a_n u,
\]
where $u^n = u^t \in (\mathbb{F}_2)^n$ is an arbitrary source sequence with blocklength $n$. The corresponding decoder is
\[
\beta_n(v^{\lceil nR \rceil}) =
\begin{cases}
u^n & \text{if } u^n \in A_\epsilon^{(n)},\ a_n u = v,\ \text{and } \nexists\, \hat{u}^n \in A_\epsilon^{(n)} \setminus \{u^n\} \text{ s.t. } a_n \hat{u} = v, \\
\hat{U}^n & \text{otherwise},
\end{cases}
\]
where $v^{\lceil nR \rceil} = v^t \in \mathbb{F}_2^{\lceil nR \rceil}$ and decoding to $\hat{U}^n$ denotes a random decoder output (which by assumption yields a decoding error). The error probability for source code $a_n$ is
\[
P_e(a_n) = \Pr\bigl(\beta_n(\alpha_n(U^n)) \neq U^n\bigr).
\]
Then we have the following source coding theorem:

Theorem 1. Let $U_1, U_2, \ldots, U_n$ be drawn iid according to distribution $p(u)$. Let $\{A_n\}_{n=1}^{\infty}$ be a sequence of rate-$R$ linear source codes, where each $A_n$ is an $\lceil nR \rceil \times n$ matrix with coefficients drawn iid Bernoulli(1/2). For any $R > H(U)$, $E[P_e(A_n)] \to 0$ as $n \to \infty$.

While Theorem 1 shows that linear source codes are asymptotically optimal, it can be shown that any fixed linear code yields statistically dependent output symbols. Therefore, linear source codes cannot achieve the entropy bound for non-uniform sources, since achieving the entropy bound would necessarily yield an incompressible data sequence.
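As a concrete illustration (not part of the original report), the following Python sketch implements the random linear source encoder and a brute-force weakly-typical-set decoder over $\mathbb{F}_2$ for a small blocklength. The helper names and the parameter values (n, R, the Bernoulli source parameter p, and the typicality slack eps) are hypothetical choices made only to keep the example small.

```python
import itertools
import numpy as np

def encode(a_n, u):
    """Linear source encoder: alpha_n(u) = a_n u over F_2."""
    return (a_n @ u) % 2

def typical_set(n, p, eps):
    """Brute-force weakly typical set A_eps^(n) for an iid Bernoulli(p) source."""
    h = -p * np.log2(p) - (1 - p) * np.log2(1 - p)   # entropy H(U)
    typical = []
    for u in itertools.product([0, 1], repeat=n):
        u = np.array(u)
        k = u.sum()                                   # number of ones in u
        logp = k * np.log2(p) + (n - k) * np.log2(1 - p)
        if abs(-logp / n - h) <= eps:
            typical.append(u)
    return typical

def decode(a_n, v, typ):
    """Typical-set decoder beta_n: unique typical u with a_n u = v, else error."""
    matches = [u for u in typ if np.array_equal((a_n @ u) % 2, v)]
    return matches[0] if len(matches) == 1 else None  # None models the random (erroneous) output

# Small demonstration with hypothetical parameters.
rng = np.random.default_rng(0)
n, R, p, eps = 12, 0.8, 0.11, 0.2
a_n = rng.integers(0, 2, size=(int(np.ceil(n * R)), n))  # coefficients iid Bernoulli(1/2)
u = (rng.random(n) < p).astype(int)                      # iid Bernoulli(p) source sequence
u_hat = decode(a_n, encode(a_n, u), typical_set(n, p, eps))
print("decoded correctly:", u_hat is not None and np.array_equal(u_hat, u))
```

The exhaustive enumeration of typical sequences is exponential in n, so this is purely illustrative of the definitions above, not a practical decoder.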
Similarly to the case of source coding, channel coding can also be viewed as an extension of network coding to unreliable channels. Prior network coding results treat the issue of robust communication against non-ergodic link failures, but here we investigate ergodic failures. We say that a network code accomplishes optimal channel coding on the given channel if the network code can be used to transmit any source with rate lower than the noisy channel capacity with asymptotically negligible error probability.

For linear channel coding over the erasure channel, we use an $n \times \lfloor nR \rfloor$ linear generator matrix $b_n$ and a conceptually simple non-linear decoder. The linear channel encoder is defined by
\[
\gamma_n(v^{\lfloor nR \rfloor}) = b_n v.
\]
For any channel output $y^n = y^t \in \{0, 1, E\}^n$, define the decoder as
\[
\delta_n(y^n) =
\begin{cases}
v^{\lfloor nR \rfloor} & \text{if } (b_n v)_i = y_i \text{ for all } i \text{ s.t. } y_i \in \mathbb{F}_2 \text{ and } \nexists\, \hat{v} \neq v \text{ s.t. } (b_n \hat{v})_i = y_i \text{ for all } i \text{ s.t. } y_i \in \mathbb{F}_2, \\
\hat{V}^{\lfloor nR \rfloor} & \text{otherwise},
\end{cases}
\]
where for any $v \in \mathbb{F}_2^{\lfloor nR \rfloor}$, $(b_n v)_i$ is the $i$th component of the vector $b_n v$. Again, decoding to $\hat{V}^{\lfloor nR \rfloor}$ denotes a random decoder output. Then we have the following channel coding theorem:

Theorem 2. Consider an erasure channel with input and output alphabets $\mathbb{F}_2$ and $\{0, 1, E\}$, respectively. The erasure sequence $Z_1, Z_2, \ldots$ is drawn iid according to distribution $q(z)$, where $Z_i = 1$ denotes an erasure and $Z_i = 0$ a successful transmission. The channel noise is assumed independent of the channel input. Let $\{B_n\}_{n=1}^{\infty}$ be a sequence of channel codes, where each $B_n$ is an $n \times \lfloor nR \rfloor$ matrix with elements drawn iid Bernoulli(1/2). If $R < 1 - q(1)$, then $E[P_e(B_n)] \to 0$ as $n \to \infty$.

Given source code $a_n$ and channel code $b_n$, the joint source-channel encoder multiplies the source input by the single $n \times n$ matrix $c_n = b_n a_n$ and transmits the output across the channel. The corresponding decoder is $\beta_n(\delta_n(\cdot))$. As an alternative to this concatenated construction, we can design a joint source-channel code at random ...
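A companion Python sketch (again not from the original report) illustrates the erasure-channel code and the concatenated joint encoder described above: the message is multiplied by a random $n \times \lfloor nR \rfloor$ generator matrix, symbols are erased iid with probability $q(1)$, and a brute-force decoder searches for the unique message consistent with the unerased positions. The parameter values, the use of -1 to represent the erasure symbol E, and the assumption that the source-code output length matches the channel-code input length are illustrative choices.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, R, q1 = 16, 0.5, 0.25              # blocklength, rate, erasure probability q(1); here R < 1 - q(1)
k = int(np.floor(n * R))              # message length floor(nR)

b_n = rng.integers(0, 2, size=(n, k)) # generator matrix, entries iid Bernoulli(1/2)
v = rng.integers(0, 2, size=k)        # message v in F_2^k
x = (b_n @ v) % 2                     # channel input gamma_n(v) = b_n v

erased = rng.random(n) < q1           # iid erasure pattern, Z_i ~ Bernoulli(q(1))
y = np.where(erased, -1, x)           # channel output; -1 stands for the erasure symbol E

def erasure_decode(b_n, y):
    """delta_n: the unique message consistent with every unerased position, else None (error)."""
    keep = y >= 0                     # indices i with y_i in F_2
    candidates = []
    for c in itertools.product([0, 1], repeat=b_n.shape[1]):
        c = np.array(c)
        if np.array_equal((b_n[keep] @ c) % 2, y[keep]):
            candidates.append(c)
    return candidates[0] if len(candidates) == 1 else None

v_hat = erasure_decode(b_n, y)
print("channel decoding succeeded:", v_hat is not None and np.array_equal(v_hat, v))

# Concatenated joint source-channel encoder: a single n x n matrix c_n = b_n a_n (mod 2),
# assuming the source-code output length ceil(nR) equals the channel-code input length k.
a_n = rng.integers(0, 2, size=(k, n))
c_n = (b_n @ a_n) % 2                 # maps a length-n source block directly to a channel input
```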

