MIT 6 454 - Study Guide

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 34, NO. 6, NOVEMBER 1988

Capacity and Error Exponent for the Direct Detection Photon Channel - Part I

AARON D. WYNER, FELLOW, IEEE

Abstract: The capacity and error exponent of the direct detection optical channel are considered. The channel input in a $T$-second interval is a waveform $\lambda(t)$, $0 \le t \le T$, which satisfies $0 \le \lambda(t) \le A$ and $(1/T)\int_0^T \lambda(t)\,dt \le \sigma A$, $0 < \sigma \le 1$. The channel output is a Poisson process with intensity parameter $\lambda(t) + \lambda_0$. The quantities $A$ and $\sigma A$ represent the peak and average power, respectively, of the optical signal, and $\lambda_0$ represents the "dark current." In Part I the channel capacity of this channel and a lower bound on the error exponent are calculated. An explicit construction for an exponentially optimum family of codes is also exhibited. In Part II we obtain an upper bound on the error exponent which coincides with the lower bound. Thus this channel is one of the very few for which the error exponent is known exactly.

DEDICATION

These papers are dedicated to the memory of Stephen O. Rice, an extraordinary mentor, supervisor, and friend. He was a master of numerical methods and asymptotics and was very much at home with the nineteenth-century menagerie of special functions. The generous and easy way in which he shared his genius with his colleagues is legendary, and I was fortunate to have been a beneficiary of his advice and expertise during the first decade of my career at Bell Laboratories. As did all of Steve's colleagues, I learned much from this gentle and talented man. We will remember him always.

I. INTRODUCTION

THIS IS THE first of a two-part series on the capacity and error exponent of the direct-detection optical channel. Specifically, in the model we consider, information modulates an optical signal for transmission over the channel, and the receiver is able to determine the arrival times of the individual photons, which occur with a Poisson distribution.
Systems based on this channel have been discussed widely in the literature [1]-[5] and are of importance in applications. The channel capacity of our channel was found by Kabanov [3] and Davis [2] using martingale techniques. In the present paper we obtain their capacity formula using an elementary and intuitively appealing method. We also obtain a "random coding" exponential upper bound on the probability of error for transmission at rates less than capacity. In Part II [8], we obtain a lower bound on the error probability which has the same asymptotic exponential behavior (as the delay becomes large with the transmission rate held fixed) as the upper bound. Thus this channel joins the infinite bandwidth additive Gaussian noise channel as the only channel for which the "error exponent" is known exactly for all rates below capacity. In Section IV of the present paper we also give an explicit construction of a family of codes for use on our channel, the error probability of which has the optimal exponent. Here too our channel and the infinite bandwidth additive Gaussian noise channel are the only two channels for which an explicit construction of exponentially optimal codes is known.

(Manuscript received March 10, 1988. This work was originally presented at the IEEE International Symposium on Information Theory, Brighton, England, June 1985. The author is with AT&T Bell Laboratories, Murray Hill, NJ 07974. IEEE Log Number 8824874.)

Precise Statement of the Problem and Results

The channel input is a waveform $\lambda(t)$, $0 \le t < \infty$, which satisfies

$$0 \le \lambda(t) \le A, \qquad (1.1)$$

where the parameter $A$ is the peak power. The waveform $\lambda(\cdot)$ defines a Poisson counting process $\nu(t)$ with "intensity" (or "rate") equal to $\lambda(t) + \lambda_0$, where $\lambda_0 \ge 0$ is a background noise level (sometimes called "dark current"). Thus the process $\nu(t)$, $0 \le t < \infty$, is the independent-increments process such that

$$\nu(0) = 0, \qquad (1.2a)$$

and, for $0 \le \tau,\, t < \infty$,

$$\Pr\{\nu(t+\tau) - \nu(t) = j\} = \frac{\Lambda^j e^{-\Lambda}}{j!}, \qquad j = 0, 1, 2, \ldots \qquad (1.2b)$$
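The defining property (1.2) suggests a direct way to sample the channel output: over a short time bin, the increment of $\nu$ is Poisson with mean approximately $(\lambda(t) + \lambda_0)\,\Delta t$. The following is a minimal simulation sketch under that binned approximation; the function names, the midpoint rule, and the example waveform are illustrative choices of mine, not constructions from the paper.

```python
import math
import random

def sample_poisson(mean, rng):
    # Knuth's method: exact Poisson sample, adequate for small means.
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_counts(lam, lam0, T, n_bins, rng):
    # The increment of nu over each bin is Poisson with mean
    # Lambda = integral of (lam(t') + lam0) dt' over the bin, per (1.2b)-(1.2c);
    # the integral is approximated by the midpoint rule.
    dt = T / n_bins
    total, path = 0, []
    for i in range(n_bins):
        t_mid = (i + 0.5) * dt
        total += sample_poisson((lam(t_mid) + lam0) * dt, rng)
        path.append(total)          # value of nu at the end of bin i
    return path

rng = random.Random(0)
A, lam0, T = 5.0, 0.1, 4.0
on_off = lambda t: A if t < T / 2 else 0.0   # peak-limited waveform, 0 <= lam(t) <= A
path = simulate_counts(on_off, lam0, T, n_bins=400, rng=rng)
```

Because $\nu$ is a counting process, any sampled path is integer-valued and nondecreasing, which makes a convenient sanity check for the simulator.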
where

$$\Lambda = \int_t^{t+\tau} \bigl(\lambda(t') + \lambda_0\bigr)\, dt'. \qquad (1.2c)$$

Physically, we think of the jumps in $\nu(\cdot)$ as corresponding to photon arrivals at the receiver. We assume that the receiver has knowledge of $\nu(t)$, which it would obtain using a photon-detector.

For any function $g(t)$, $0 \le t < \infty$, let $g_a^b$ denote $\{g(t): a \le t \le b\}$. Let $S(T)$ denote the space of (step) functions $g(t)$, $0 \le t \le T$, such that $g(0) = 0$, $g(t) \in \{0, 1, 2, \ldots\}$, and $g(t)$ is nondecreasing. Therefore $\nu_0^T$, the Poisson counting process defined above, takes values in $S(T)$.

A code with parameters $(M, T, \sigma, P_e)$ is defined by the following:

a) a set of $M$ waveforms $\lambda_m(t)$, $0 \le t \le T$, $1 \le m \le M$, which satisfy the "peak power constraint" (1.1) and the "average power constraint"

$$\frac{1}{T}\int_0^T \lambda_m(t)\, dt \le \sigma A \qquad (1.3)$$

(of course, $0 \le \sigma \le 1$);

b) a "decoder" mapping $D: S(T) \to \{1, 2, \ldots, M\}$.

The overall error probability is

$$P_e = \frac{1}{M}\sum_{m=1}^{M} \Pr\bigl\{D(\nu_0^T) \ne m \mid \lambda(\cdot) = \lambda_m(\cdot)\bigr\}, \qquad (1.4)$$

where the conditional probabilities in (1.4) are computed using (1.2) with $\lambda(\cdot) = \lambda_m(\cdot)$.

A code as defined above can be used in a communication system in the usual way to transmit one of $M$ messages. Thus when $\lambda_m(t)$, corresponding to message $m$, $1 \le m \le M$, is transmitted, the waveform $\nu(t)$, $0 \le t \le T$, is received and is decoded as $D(\nu_0^T)$. Equation (1.4) gives the "word error probability," the probability that $D(\nu_0^T) \ne m$ when message $m$ is transmitted, averaged over the $M$ messages (which are assumed to be equally likely). The rate of the code (in nats per second) is $(1/T)\ln M$.

Let $A$, $\lambda_0$, $\sigma$ be given. A rate $R \ge 0$ is said to be achievable if, for all $\epsilon > 0$, there exists (for $T$ sufficiently large) a code with parameters $(M, T, \sigma, P_e)$ with $M \ge e^{RT}$ and $P_e \le \epsilon$. The channel capacity $C$ is the supremum of achievable rates. In Section II we establish the following theorem, which was found earlier by Kabanov [3] and Davis [2] using less elementary methods.
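The definitions above can be exercised with a toy two-codeword code and a Monte Carlo estimate of the word error probability (1.4). This is only an illustration of the definitions, not the code construction of Section IV, and all names and parameter values are mine. For two on-off waveforms that are "on" over complementary halves of $[0, T]$, the half-interval photon counts $(N_1, N_2)$ are sufficient statistics, and (for $\lambda_0 > 0$) maximum-likelihood decoding reduces to picking the half with more counts.

```python
import math
import random

def poisson(mean, rng):
    # Knuth's method; adequate for the small means used here.
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def word_error_rate(A, lam0, T, trials, rng):
    # Two-codeword code: lam_1 is "on" over [0, T/2], lam_2 over [T/2, T].
    # Per (1.2), the counts in the "on" and "off" halves are independent
    # Poisson with means (A + lam0) T/2 and lam0 T/2, respectively.
    on, off = (A + lam0) * T / 2, lam0 * T / 2
    errors = 0
    for _ in range(trials):
        m = rng.randrange(2)                 # messages equally likely, as in (1.4)
        n1 = poisson(on if m == 0 else off, rng)
        n2 = poisson(off if m == 0 else on, rng)
        # ML decision: the log-likelihood ratio is (n1 - n2) ln(on/off),
        # so decide by count comparison, breaking ties at random.
        guess = 0 if n1 > n2 else 1 if n2 > n1 else rng.randrange(2)
        errors += (guess != m)
    return errors / trials

rng = random.Random(1)
pe = word_error_rate(A=5.0, lam0=0.1, T=4.0, trials=2000, rng=rng)
```

With these parameters the "on" half has mean count 10.2 against 0.2 for the "off" half, so the estimated $P_e$ comes out very small, consistent with the exponential decay in $T$ that Parts I and II quantify.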
Theorem 1: For $A$, $\lambda_0$, $\sigma \ge 0$,

$$C = A\bigl[q^*(1+s)\ln(1+s) + (1-q^*)s\ln s - (q^*+s)\ln(q^*+s)\bigr] \qquad (1.5a)$$

where

$$s = \lambda_0/A, \qquad (1.5b)$$

$$q^* = \min\bigl(\sigma, q_0(s)\bigr), \qquad (1.5c)$$

and

$$q_0(s) = \frac{(1+s)^{(1+s)}}{e\, s^s} - s. \qquad (1.5d)$$

For the
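Theorem 1 is easy to evaluate numerically; the sketch below is mine (the function name and spot checks are not from the paper), with the convention $x \ln x \to 0$ as $x \to 0$ so that the $\lambda_0 = 0$ case is handled. A useful sanity check: for $s = 0$ and $\sigma \ge 1/e$, (1.5) reduces to the noiseless peak-limited Poisson capacity $C = A/e$ nats per second.

```python
import math

def capacity(A, lam0, sigma):
    """Channel capacity from (1.5a)-(1.5d), in nats per second."""
    xlnx = lambda x: 0.0 if x == 0.0 else x * math.log(x)  # continuous extension at 0
    s = lam0 / A                                            # (1.5b)
    q0 = (1 + s) ** (1 + s) / (math.e * s ** s) - s         # (1.5d); 0.0 ** 0.0 == 1.0
    q = min(sigma, q0)                                      # (1.5c)
    # (1.5a): C = A [ q (1+s) ln(1+s) + (1-q) s ln s - (q+s) ln(q+s) ]
    return A * (q * (1 + s) * math.log(1 + s) + (1 - q) * xlnx(s) - xlnx(q + s))
```

For example, `capacity(1.0, 0.0, 1.0)` gives $1/e \approx 0.368$, while a tighter average-power constraint $\sigma < 1/e$ makes $q^* = \sigma$ bind in (1.5c) and lowers the capacity to $-A\,\sigma\ln\sigma$ when $\lambda_0 = 0$.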

