Outline
•Transmitters (Chapters 3 and 4, Source Coding and Modulation) (weeks 1 and 2)
•Receivers (Chapter 5) (weeks 3 and 4)
•Received Signal Synchronization (Chapter 6) (week 5)
•Channel Capacity (Chapter 7) (week 6)
•Error Correction Codes (Chapter 8) (weeks 7 and 8)
•Equalization (Bandwidth Constrained Channels) (Chapter 10) (week 9)
•Adaptive Equalization (Chapter 11) (weeks 10 and 11)
•Spread Spectrum (Chapter 13) (week 12)
•Fading and Multipath (Chapter 14) (week 12)

Transmitters (weeks 1 and 2)
•Information Measures
•Vector Quantization
•Delta Modulation
•QAM

Digital Communication System
[Block diagram: Transmitter and Receiver, annotated with the transmitter goals: information per bit increases, noise immunity increases, bandwidth efficiency increases.]

Transmitter Topics
•Increasing information per bit
•Increasing noise immunity
•Increasing bandwidth efficiency

Increasing Information per Bit
•Information in a source
–Mathematical Models of Sources
–Information Measures
•Compressing information
–Huffman encoding
•Optimal compression?
–Lempel-Ziv-Welch algorithm
•Practical compression
•Quantization of analog data
–Scalar Quantization
–Vector Quantization
–Model Based Coding
–Practical Quantization
•μ-law encoding
•Delta Modulation
•Linear Predictive Coding (LPC)

Increasing Noise Immunity
•Coding (Chapter 8, weeks 7 and 8)

Increasing Bandwidth Efficiency
•Modulation of digital data into analog waveforms
–Impact of modulation on bandwidth efficiency

Mathematical Models of Sources
•Discrete Sources
–Discrete Memoryless Source (DMS): statistically independent letters from a finite alphabet
–Stationary Source: statistically dependent letters, but the joint probabilities of sequences of equal length remain constant
•Analog Sources
–Band limited, |f| < W: equivalent to a discrete source sampled at the Nyquist rate 2W, but with an infinite (continuous) alphabet

Discrete Sources
•Discrete Memoryless Source (DMS)
–Statistically independent letters from a finite alphabet
–e.g., a normal binary data stream X might be a series of random events of either X=1 or X=0, with P(X=1) = constant = 1 - P(X=0)
–e.g., well-compressed data, digital noise
•Stationary Source
–Statistically dependent letters, but the joint probabilities of sequences of equal length remain constant
–e.g., the probability that the sequence ai, ai+1, ai+2, ai+3 = 1001 when aj, aj+1, aj+2, aj+3 = 1010 is always the same
–An approximation for uncoded text

Analog Sources
•Band limited, |f| < W
–Equivalent to a discrete source sampled at the Nyquist rate 2W, but with an infinite (continuous) alphabet

Information in a DMS letter
•If an event X denotes the arrival of a letter xi with probability P(X=xi) = P(xi), the information contained in the event is defined as
I(X=xi) = I(xi) = -log2(P(xi)) bits
[Plot: I(xi) versus P(xi)]
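As a quick check of the definition above, here is a minimal Python sketch (not part of the original slides; the helper name self_information is my own). It evaluates I(xi) = -log2(P(xi)) for a few probabilities and reproduces the values worked out in the examples that follow.

import math

def self_information(p: float) -> float:
    """Self-information I(x) = -log2(P(x)) in bits for a letter of probability p."""
    if p == 0.0:
        return math.inf  # an impossible letter would carry infinite information
    return -math.log2(p)

print(self_information(0.5))  # 1.0 bit: a fair binary letter
print(self_information(1.0))  # 0.0 bits: a certain letter carries no information
print(self_information(0.0))  # inf: but this letter never actually occurs, since P = 0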
Examples
•e.g., an event X generates a random letter of value 1 or 0 with equal probability, P(X=0) = P(X=1) = 0.5; then I(X) = -log2(0.5) = 1, i.e., 1 bit of information arrives each time X occurs
•e.g., if X is always 1, then P(X=0) = 0 and P(X=1) = 1; then I(X=0) = -log2(0) = ∞ and I(X=1) = -log2(1) = 0

Discussion
•I(X=1) = -log2(1) = 0 means no information is delivered by X, which is consistent with X = 1 all the time.
•I(X=0) = -log2(0) = ∞ means that if X=0 occurred, a huge amount of information would arrive; however, since P(X=0) = 0, this never happens.

Average Information
•To deal with I(X=0) = ∞ when P(X=0) = 0, we need to consider how much information actually arrives with the event over time.
•The average letter information for letter xi out of an alphabet of L letters, i = 1, 2, 3, ..., L, is
P(xi)I(xi) = -P(xi)log2(P(xi))
•Plotting this for 2 symbols (1, 0), we see that on average at most a little more than 0.5 bits of information arrive with a particular letter, and that very low or very high probability letters generally carry little information.

Average Information (Entropy)
•Now let's consider the average information of the event X made up of the random arrival of all the letters xi in the alphabet.
•This is the sum of the average information arriving with each letter:
H(X) = Σi=1..L P(xi)I(xi) = -Σi=1..L P(xi)log2(P(xi))
•Plotting this for L = 2, we see that on average at most 1 bit of information is delivered per event, but only if both symbols arrive with equal probability.
[Plot: average information of an event with 2 letters versus the probability of the first letter in the alphabet, x1]

Average Information (Entropy)
•What is the best possible entropy for a multi-symbol code?
H(X) = -Σi=1..L P(xi)log2(P(xi)) ≤ log2(L)
•So multi-bit binary symbols made of equally probable random bits are the most efficient information carriers; i.e., 256 symbols made from 8-bit bytes are fine from an information standpoint.
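The entropy claims above are easy to verify numerically. The following Python sketch (not from the slides; the helper name entropy_bits is my own) computes H(X) = -Σ P(xi)log2(P(xi)) and confirms that a binary source peaks at 1 bit per letter when both symbols are equally likely, that the per-letter average information peaks a little above 0.5 bits, and that L equally probable letters reach the bound log2(L), e.g., 8 bits for 256 equally likely byte values.

import math
from typing import Sequence

def entropy_bits(probs: Sequence[float]) -> float:
    """Entropy H(X) = -sum(P(xi) * log2(P(xi))) in bits; letters with P(xi) = 0 contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

# Binary source: at most 1 bit per event, reached only at P = 0.5.
print(entropy_bits([0.5, 0.5]))     # 1.0
print(entropy_bits([0.9, 0.1]))     # ~0.469 bits: a biased source delivers less information
print(entropy_bits([1.0, 0.0]))     # 0.0 bits: a constant source delivers none

# Per-letter average information -p*log2(p) peaks a little above 0.5 bits, at p = 1/e.
p = 1.0 / math.e
print(-p * math.log2(p))            # ~0.531 bits

# L equally probable letters reach the upper bound log2(L):
# 256 equally likely 8-bit bytes carry exactly 8 bits each.
L = 256
print(entropy_bits([1.0 / L] * L))  # 8.0
print(math.log2(L))                 # 8.0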

