UCSB ECE 178 - IMAGE COMPRESSION- I

Week VIII

Reading: Chapter 8
– Section 8.1 (excluding 8.1.7)
– 8.2.1 Huffman, 8.2.4 LZW, 8.2.5 run-length
– 8.2.8 Block transform coding & JPEG
– 8.2.9 Predictive coding: spatial & temporal, lossless and lossy
– 8.2.10 (time permitting) wavelet compression

Image compression
Objective: to reduce the amount of data required to represent an image. Important in data storage and transmission:
• Progressive transmission of images (internet, WWW)
• Video coding (HDTV, teleconferencing, digital cinema)
• Digital libraries and image databases
• Medical imaging
• Satellite images

IMAGE COMPRESSION
• Data redundancy
• Self-information and entropy
• Error-free and lossy compression
• Huffman coding
• Predictive coding
• Transform coding

Lossy vs Lossless Compression
Compression techniques divide into two classes:
• Information preserving (lossless): images can be compressed and restored without any loss of information. Applications: medical images, GIS.
• Lossy: perfect recovery is not possible, but a large data compression is obtained. Examples: TV signals, teleconferencing.

Data Redundancy
• CODING: fewer bits to represent frequently occurring symbols.
• INTERPIXEL / INTERFRAME: neighboring pixels have similar values.
• PSYCHOVISUAL: the human visual system cannot simultaneously distinguish all colors.

Coding Redundancy
Use fewer bits to represent frequently occurring symbols. Let p_r(r_k) = n_k / n, k = 0, 1, ..., L-1, where L is the number of gray levels, and let r_k be represented by l(r_k) bits. The average number of bits required to represent each pixel is then

    L_{avg} = \sum_{k=0}^{L-1} l(r_k) \, p_r(r_k)        (A)

Usually l(r_k) = m bits (a constant), in which case

    L_{avg} = m \sum_{k=0}^{L-1} p_r(r_k) = m

Coding Redundancy (contd.)
Consider equation (A): it makes sense to assign fewer bits to those r_k for which p_r(r_k) is large, in order to reduce the sum. This achieves data compression and results in a variable-length code: the more probable gray levels receive fewer bits.

Coding: Example (from text)

    r_k         p_r(r_k)   Code      l(r_k)
    r_0 = 0       0.19     11          2
    r_1 = 1/7     0.25     01          2
    r_2 = 2/7     0.21     10          2
    r_3 = 3/7     0.16     001         3
    r_4 = 4/7     0.08     0001        4
    r_5 = 5/7     0.06     00001       5
    r_6 = 6/7     0.03     000001      6
    r_7 = 1       0.02     000000      6

    L_{avg} = \sum_k l(r_k) \, p_r(r_k) = 2.7 bits/pixel

i.e., about 10% less code than the 3 bits/pixel of the natural fixed-length code.
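To make the numbers above concrete, here is a minimal Python sketch (my own illustration, not from the slides) that recomputes the average length of the variable-length code in the table via equation (A) and the resulting saving over the 3-bit fixed-length code; the probabilities and codewords are taken directly from the example.

    # Variable-length code from the example: (p_r(r_k), codeword) pairs.
    code = [
        (0.19, "11"),     (0.25, "01"),     (0.21, "10"),
        (0.16, "001"),    (0.08, "0001"),   (0.06, "00001"),
        (0.03, "000001"), (0.02, "000000"),
    ]

    # Equation (A): L_avg = sum over k of l(r_k) * p_r(r_k)
    l_avg = sum(p * len(word) for p, word in code)

    fixed = 3  # 8 gray levels need 3 bits/pixel with a fixed-length code
    print(f"L_avg  = {l_avg:.2f} bits/pixel")   # -> 2.70
    print(f"saving = {1 - l_avg / fixed:.0%}")  # -> 10%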
Inter-pixel / Inter-frame Redundancy
Spatial: f(x,y) depends on its neighbors f(x', y'), (x', y') ∈ N_xy, where N_xy is a neighborhood of pixels around (x,y).
Interframe: the frames f(x,y,t_i), i = 0, 1, 2, 3, ..., are related to each other; this can be exploited for video compression.

Psychovisual Redundancy
The human visual system has limitations; a good example is quantization, which conveys the information but requires much less memory/space. (Example: Figure 8.4 in text; MATLAB.)

[Figure slide: Quantization]

[Figure slide: IGS Code]

General Model
General compression model:

    f(x,y) → source encoder → channel encoder → channel → channel decoder → source decoder

Source encoder:

    f(x,y) → mapper → quantizer → symbol encoder → g(x,y) (to channel)

Source Encoder
• Mapper: designed to reduce interpixel redundancy. Examples: run-length encoding (results in compression); a transform to another domain where the coefficients are less correlated than the original, e.g. the Fourier transform.
• Quantizer: reduces psychovisual redundancies in the image; it should be left out if error-free encoding is desired.
• Symbol encoder: creates a fixed/variable-length code and reduces coding redundancies.

Source Decoder

    from channel → symbol decoder → inverse mapper → f̂(x,y)

Note: quantization is NOT reversible.
Question(s):
• Is there a minimum amount of data that is sufficient to completely describe an image without any loss of information?
• How do you measure information?

Self-Information
Suppose an event E occurs with probability P(E); then it is said to contain

    I(E) = -\log P(E)

units of information. P(E) = 1 [always happens] => I(E) = 0 [conveys no information]. If the base of the logarithm is 2, then the unit of information is called a "bit": if P(E) = 1/2, then I(E) = -\log_2(1/2) = 1 bit. Example: flipping a fair coin; conveying the outcome of this experiment requires one bit.

Self-Information (contd.)
Assume an information source that generates the symbols {a_0, a_1, a_2, ..., a_{L-1}} with probabilities p(a_i), where \sum_{i=0}^{L-1} p(a_i) = 1. Then

    I(a_i) = -\log_2 p(a_i)  bits

ENTROPY
The average information per source output is

    H = -\sum_{i=0}^{L-1} p(a_i) \log_2 p(a_i)   bits/symbol

H is called the uncertainty or the entropy of the source. If all the source symbols are equally probable, the source has maximum entropy. H gives the lower bound on the number of bits required to code a signal.

Noiseless Coding Theorem (Shannon)
It is possible to code, without any loss of information, a source signal with entropy H bits/symbol using H + ε bits/symbol, where ε is an arbitrarily small quantity. ε can be made arbitrarily small by considering increasingly larger blocks of symbols to be coded.

Error-Free Coding
Error-free coding addresses two redundancies:
• Coding redundancy: Huffman coding, which yields the smallest possible number of code symbols per source symbol when symbols are coded one at a time.
• Interpixel redundancy: run-length coding.

Huffman Code: Example
Consider a 6-symbol source:

    symbol    a1     a2     a3     a4     a5     a6
    p(a_i)    0.1    0.4    0.06   0.1    0.04   0.3

Huffman Coding: Example (contd.)
Successive source reductions, with the codeword assigned at each stage in parentheses:

    a2   0.4 (1)       0.4 (1)      0.4 (1)     0.4 (1)    0.6 (0)
    a6   0.3 (00)      0.3 (00)     0.3 (00)    0.3 (00)   0.4 (1)
    a1   0.1 (011)     0.1 (011)    0.2 (010)   0.3 (01)
    a4   0.1 (0100)    0.1 (0100)   0.1 (011)
    a3   0.06 (01010)  0.1 (0101)
    a5   0.04 (01011)

Example (contd.)
Average length:

    L_{avg} = (0.4)(1) + (0.3)(2) + (0.1)(3) + (0.1)(4) + (0.06 + 0.04)(5) = 2.2 bits/symbol

Entropy:

    H = -\sum_i p_i \log_2 p_i = 2.14 bits/symbol

Huffman Code: Steps
• Arrange the symbol probabilities p_i in decreasing order.
• While there is more than one node:
  – Merge the two nodes with the smallest probabilities to form a new node whose probability equals their sum.
  – Arbitrarily assign 1 and 0 to the two branches merging into the node.
• Read each codeword sequentially from the root node to the leaf node where the symbol is located. (A code sketch of these steps follows below.)

[Figure slide: Huffman code (final)]
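As a cross-check of the example above, the following Python sketch (my own, not from the slides) implements the merge loop with a priority queue, which keeps the two smallest probabilities at hand without the explicit sorting step, and reports the average code length and the entropy of the 6-symbol source. Because the 0/1 branch labels are assigned arbitrarily, the individual codewords may differ from the table, but the average length of any Huffman code for this source is the same 2.2 bits/symbol.

    import heapq
    import itertools
    import math

    def huffman(probs):
        """Build a {symbol: codeword} map for a {symbol: probability} map."""
        tie = itertools.count()  # unique tie-breaker so dicts are never compared
        heap = [(p, next(tie), {s: ""}) for s, p in probs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            # Merge the two nodes with the smallest probabilities...
            p0, _, c0 = heapq.heappop(heap)
            p1, _, c1 = heapq.heappop(heap)
            # ...prefixing 0 and 1 (an arbitrary assignment) to their codewords.
            merged = {s: "0" + w for s, w in c0.items()}
            merged.update({s: "1" + w for s, w in c1.items()})
            heapq.heappush(heap, (p0 + p1, next(tie), merged))
        return heap[0][2]

    probs = {"a1": 0.1, "a2": 0.4, "a3": 0.06, "a4": 0.1, "a5": 0.04, "a6": 0.3}
    code = huffman(probs)

    l_avg = sum(probs[s] * len(w) for s, w in code.items())
    H = -sum(p * math.log2(p) for p in probs.values())
    print(code)
    print(f"L_avg = {l_avg:.2f} bits/symbol")  # -> 2.20
    print(f"H     = {H:.2f} bits/symbol")      # -> 2.14

Note that L_avg = 2.2 is within 0.06 bits of the entropy bound H = 2.14, consistent with Shannon's noiseless coding theorem.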

