
## Topic 2: Mathematical Models of Compression

Assumptions:

- Signal is discrete time or space (e.g., already sampled).
- Do not separate out signal decompositions; i.e., assume these are either done already or to be done as part of the code.
- Consider a code structure that maps blocks or vectors of input data into possibly variable-length binary strings.

We will later consider other code structures, which can be recursive and have memory. The fixed block structures can, however, model such codes by allowing local behavior that is recursive.

### Input Signal

Assume a discrete-time random process input signal $\{X_n\}$; e.g., each $X_n$ might be a real number, a block of pixels, 50 speech samples, a single binary symbol, or $N$ binary symbols. We will assume that each $X_n$ is a $k$-dimensional vector with alphabet $A \subset \Re^k$.

This implies that we know the distributions $P_{X^k}$ for all vectors $X^k = (X_0, X_1, \ldots, X_{k-1})$, $k = 1, 2, \ldots$. We use superscript notation when the dimension is not clear from context.

We usually assume some form of stationarity (strict, asymptotic, etc.). Arguing over "stationarity" of real data is a red herring; the theory can handle very general processes (with more complicated "universal" or "adaptive" codes). For simplicity, assume strict stationarity.
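The assumption that each input is a $k$-dimensional vector can be pictured by blocking a sampled scalar signal into non-overlapping $k$-vectors. A minimal sketch (the helper name `block_source` is ours, not from the notes):

```python
import numpy as np

def block_source(x, k):
    """Group a scalar sequence x into non-overlapping k-dimensional
    vectors (x[n*k], ..., x[(n+1)*k - 1]), dropping any incomplete
    final block."""
    n_blocks = len(x) // k
    return np.asarray(x[:n_blocks * k]).reshape(n_blocks, k)

# Example: 10 samples grouped into 2-dimensional input vectors.
samples = np.arange(10)
blocks = block_source(samples, k=2)
# blocks[1] is the second vector, (x[2], x[3]) = [2, 3]
```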
Then all $X^k_n = (X_{nk}, X_{nk+1}, \ldots, X_{(n+1)k-1})$ are distributed as a generic random vector $X^k$, say with probability distribution $P_{X^k}$. We drop the superscript $k$ when it is clear from context.

### Encoder

An encoder (or source encoder) $\alpha$ is a mapping of the input vectors into a collection $\mathcal{W}$ of binary sequences, $\mathcal{W} \subset \{0,1\}^*$, the space of all possible binary sequences.

$\mathcal{W}$ is called the channel codebook and its members are referred to as channel codewords. It is the set of binary sequences that will be stored or transmitted and used by the decoder to reconstruct the source as well as it can.

Thus an encoder is a mapping $\alpha : A \to \mathcal{W}$. Applying the encoder to an input $X$ yields a codeword $i = \alpha(X)$.

Given an $i \in \{0,1\}^*$, define $l(i)$ = length of the binary vector $i$; e.g., $l(0) = 1$, $l(101) = 3$, $l(1011000) = 7$.

Define the instantaneous rate of a binary vector $i$ by

$$r(i) = \frac{l(i)}{k}$$

in bits per input symbol.

The average rate or average codeword length of the encoder applied to the source is defined by

$$R(\alpha, \mathcal{W}) = E[r(\alpha(X))].$$

An encoder is said to be fixed length or fixed rate if all channel codewords have the same length, i.e., if $l(i) = Rk$ for all $i \in \mathcal{W}$.

The choice between fixed- and variable-rate codes can have important implications in practice:

- Variable-rate codes can cost more, as they may require data buffering if the encoded data is to be transmitted over a fixed-rate communication channel.
- Variable-rate codes are harder to synchronize: single errors in decoding, or lost or added bits on the channel, can have catastrophic effects.
- Buffers can overflow (causing data loss) or underflow (causing wasted time or bandwidth).

But variable-rate codes can provide superior rate/distortion tradeoffs. E.g., in image compression one can use more bits for edges, fewer for flat areas.
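The definitions of instantaneous rate $r(i) = l(i)/k$ and average rate $R(\alpha, \mathcal{W}) = E[r(\alpha(X))]$ can be sketched with a toy variable-rate encoder (the codebook below is hypothetical, chosen only for illustration):

```python
# Toy variable-rate encoder alpha for a 1-dimensional (k = 1) source
# with alphabet A = {'a', 'b', 'c'}; the channel codebook is
# W = {'0', '10', '11'}.
codebook = {'a': '0', 'b': '10', 'c': '11'}
k = 1

def rate(codeword, k):
    """Instantaneous rate r(i) = l(i)/k in bits per input symbol."""
    return len(codeword) / k

# Average rate R(alpha, W) = E[r(alpha(X))], here estimated empirically
# over a short sample of the source.
source = list('aaabbc')
avg_rate = sum(rate(codebook[x], k) for x in source) / len(source)
# 'a' (1 bit) occurs 3 times; 'b' and 'c' (2 bits each) occur 3 times
# in total, so R = (3*1 + 3*2) / 6 = 1.5 bits per symbol
```

A fixed-rate encoder is the special case where every codeword in the dictionary has the same length.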
In voice compression, more bits for plosives, fewer for vowels.

### Decoder

Define a source decoder $\beta : \mathcal{W} \to \hat{A}$; usually $\hat{A} = A$. The decoder is a table lookup.

Define the reproduction codebook

$$C \equiv \{\beta(i);\ i \in \mathcal{W}\}.$$

Members of $C$ are called reproduction codewords or templates. It is often convenient to reindex the codebook using integers as

$$C \equiv \{\beta_l;\ l = 0, 1, \ldots, M-1\}$$

where $M$ is the number of reproduction codewords: $M = \|\mathcal{W}\|$.

A source code or compression code for the source $\{X_n\}$ consists of a triple $(\alpha, \mathcal{W}, \beta)$ of encoder, channel codebook, and decoder. We will see other, equivalent, representations.

$$X_n \to \text{ENCODER} \to i_n = \alpha(X_n) \to \text{DECODER} \to \hat{X}_n = \beta(i_n)$$

$$A \stackrel{\alpha}{\to} \mathcal{W} \stackrel{\beta}{\to} C \tag{1}$$

or, equivalently,

$$X \stackrel{\alpha}{\to} i = \alpha(X) \stackrel{\beta}{\to} \hat{X} = \beta(\alpha(X)). \tag{2}$$

This is the general block memoryless source code. Later we consider codes with memory, though a general block might operate in a local nonmemoryless fashion.

A source code is invertible or noiseless or lossless if

$$\beta(\alpha(x)) = x$$

(at least with probability 1). Alternatively, the code is invertible if $\beta = \alpha^{-1}$.

A code is lossy if it is not lossless. In this case a measure of distortion $d$ between the input vector and the reconstruction is required to measure the seriousness of the loss.

### Quality and Cost

**Distortion.** A distortion measure $d(x, \hat{x})$ measures the distortion or loss resulting if an original input $x$ is reproduced as $\hat{x}$. Mathematically, a distortion measure satisfies

$$d(x, \hat{x}) \ge 0.$$

To be useful, $d$ should be

- easy to compute,
- tractable,
- meaningful for perception or application.

No single distortion measure accomplishes all of these goals, although the ubiquitous squared error distortion defined by

$$d(x, y) = \|x - y\|^2 = \sum_{l=0}^{k-1} |x_l - y_l|^2,$$

where $x = (x_0, x_1, \ldots, x_{k-1})^t$ and $y = (y_0, y_1, \ldots, y_{k-1})^t$, accomplishes the first two goals and occasionally correlates with the third.

Weighted or transform/weighted versions are used for perceptual coding:

$$d(X, \hat{X}) = (X - \hat{X})^* B_X (X - \hat{X}), \quad B_X \text{ positive definite.}$$

The most common choice is $B_X = I$, giving $d(X, \hat{X}) = \|X - \hat{X}\|^2$ (MSE).

Other distortion measures of interest:

Hamming distortion:

$$d_H(x, \hat{x}) = 1 - \delta_{x,\hat{x}} = \begin{cases} 0 & \text{if } x = \hat{x} \\ 1 & \text{otherwise} \end{cases}$$

Hamming weight or average Hamming distance: for vectors $x = (x_0, x_1, \ldots, x_{k-1})$,

$$d(x, \hat{x}) = \sum_{l=0}^{k-1} d_H(x_l, \hat{x}_l).$$

$l_p$ norms:

$$d(x, y) = \|x - y\|_p = \left( \sum_{l=0}^{k-1} |x_l - y_l|^p \right)^{1/p}.$$

A common assumption is that $d$ is an additive distortion measure: we have a distortion measure $d_1(a, b)$ defined on "scalars", and for a vector $x^k = (x_0, x_1, \ldots, x_{k-1})$ we define

$$d_k(x^k, \hat{x}^k) = \sum_{l=0}^{k-1} d_1(x_l, \hat{x}_l).$$

This is often normalized.

If $d(x, y) = 0$ iff $x = y$, then zero-distortion coding is equivalent to lossless coding. We make this assumption for convenience (if the source is discrete).

If we apply $(\alpha, \mathcal{W}, \beta)$ to $x$, the distortion is $d(x, \beta(\alpha(x)))$; it does not depend explicitly on $\mathcal{W}$.

The performance of a compression system is measured by the expected values of the distortion and the rate:

$$D(\alpha, \mathcal{W}, \beta) = D(\alpha, \beta) = E[d(X, \beta(\alpha(X)))]$$

$$R(\alpha, \mathcal{W}) = E[r(\alpha(X))] = \frac{1}{k} E[l(\alpha(X))].$$

The unit of rate is bits per input symbol. Occasionally we drop the normalization, and the unit becomes bits per input vector.

Every code yields a point $(R, D)$ in the distortion/rate plane.

[Figure: scatter of $(R, D)$ points, one per code, with average rate (0.5 to 3.0) on the horizontal axis and average distortion on the vertical axis.]

$D(\alpha, \beta)$ and $R(\alpha, \mathcal{W})$ measure costs; we want to minimize both ⇔ a tradeoff.

We are interested in undominated points in the D-R plane: for a given rate (distortion) we want the smallest possible distortion (rate) ⇔ optimal codes.

Optimization problem:

- Given $R$, what is the smallest possible $D$?
- Given $D$, what is the smallest possible $R$?
- Lagrangian formulation: what is the smallest possible $D + \lambda R$?

Lossless codes correspond to $(0, R)$ points, an extreme point of the above general optimization. What is the smallest $R$ giving a lossless code? Another extreme point that will prove of interest:
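The search for undominated (R, D) points and the Lagrangian formulation D + λR can be sketched over a finite set of candidate codes. A minimal illustration with made-up (R, D) points; the helper names `undominated` and `best_lagrangian` are ours:

```python
def undominated(points):
    """Return the (R, D) points not dominated by any other point,
    i.e., no distinct point has both rate and distortion <= this one."""
    keep = []
    for (r, d) in points:
        dominated = any(r2 <= r and d2 <= d and (r2, d2) != (r, d)
                        for (r2, d2) in points)
        if not dominated:
            keep.append((r, d))
    return sorted(keep)

def best_lagrangian(points, lam):
    """Smallest Lagrangian cost D + lam * R over the available codes."""
    return min(d + lam * r for (r, d) in points)

# Hypothetical codes, each summarized by its (rate, distortion) pair.
codes = [(0.5, 4.0), (1.0, 2.0), (1.0, 3.0), (2.0, 1.0), (2.5, 1.2)]
front = undominated(codes)
# (1.0, 3.0) is dominated by (1.0, 2.0); (2.5, 1.2) by (2.0, 1.0).
cost = best_lagrangian(codes, lam=1.0)
# costs are 4.5, 3.0, 4.0, 3.0, 3.7 -> minimum 3.0
```

Sweeping λ from 0 to ∞ traces out the two extreme points mentioned above: large λ favors low-rate codes, while λ → 0 favors low (eventually zero) distortion.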