EE482C: Advanced Computer Organization
Stream Processor Architecture
Stanford University, Thursday, 18 April 2002

Homework #1: Wavelet Compression on Imagine
Due Date: Tuesday, 30 April 2002
Checkpoints: Tuesday, 23 April 2002; Friday, 26 April 2002

The purpose of this assignment is to give you a feel for stream programming. To achieve this, you will program a simple application for Imagine using the Imagine toolchain. The application is a one-dimensional signal compression based on wavelets. Please work on this assignment in groups of up to four people, and hand in one write-up per group.

I'll give a very brief introduction to wavelets and wavelet compression, and then a more detailed description of the algorithm you will implement. If you are interested in what exactly is required in this assignment, go to Section 3.

1 What is Wavelet Compression

This section provides a very brief description of compression using wavelets.

1.1 What are Wavelets

The most basic definition of a wavelet is simply a function with a well-defined temporal support that "wiggles" about the x-axis (it has exactly the same area above and below the axis). This definition, however, does not help us much; a better approach is to explain what the wavelet transform and wavelet analysis are.

The basic Wavelet Transform is similar to the well-known Fourier Transform. Like the Fourier Transform, the coefficients are calculated by an inner product of the input signal with a set of orthonormal basis functions that span L^2 (this is a small subset of all available wavelet transforms, though).
The difference comes in the way these functions are constructed, and more importantly in the types of analyses they allow. The key difference is that the Wavelet Transform is a multi-resolution transform; that is, it allows a form of time-frequency analysis (or translation-scale in wavelet speak). When using the Fourier Transform, the result is a very precise analysis of the frequencies contained in the signal, but no information on when those frequencies occurred. In the wavelet transform we get information about when certain features occurred, and about the scale characteristics of the signal. Scale is analogous to frequency, and is a measure of the amount of detail in the signal. Small scale generally means coarse details, and large scale means fine details (scale is a number related to the number of coefficients and is therefore counter-intuitive to the level of detail).

The Discrete Wavelet Transform can be described as a series of filtering and sub-sampling (decimating in time) steps, as depicted in Figure 1.

Figure 1: Discrete Wavelet Transform (from http://www.public.iastate.edu/~rpolikar/WAVELETS/WTpart4.html)

At each level in this series, a set of 2^(j-1) coefficients is calculated, where j < J is the scale and N = 2^J is the number of samples in the input signal. The coefficients are calculated by applying a high-pass wavelet filter to the signal and down-sampling the result by a factor of 2. At the same level, a low-pass scale filtering is also performed (followed by down-sampling) to produce the signal for the next level. Both the wavelet and scale filters can be obtained from a single Quadrature Mirror Filter (QMF) function that defines the wavelet.

Each set of scale coefficients corresponds to a "smoothing" of the signal and the removal of details, whereas the wavelet coefficients correspond to the "differences" between the scales.
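As a concrete illustration (this is not the Imagine or Wavelab code), one filter-and-decimate level can be sketched in Python. It assumes the simplest QMF pair, the Haar filters, and the function name dwt_level is our own:

```python
import numpy as np

def dwt_level(signal, lo, hi):
    """One DWT level: periodic (circular) filtering, down-sampled by 2.
    lo is the low-pass scale filter, hi the high-pass wavelet filter."""
    n = len(signal)
    approx = np.empty(n // 2)   # scale coefficients (input to the next level)
    detail = np.empty(n // 2)   # wavelet coefficients at this scale
    for k in range(n // 2):
        # Evaluating the circular convolution only at even shifts fuses the
        # filtering with the factor-of-2 down-sampling.
        approx[k] = sum(lo[i] * signal[(2 * k + i) % n] for i in range(len(lo)))
        detail[k] = sum(hi[i] * signal[(2 * k + i) % n] for i in range(len(hi)))
    return approx, detail

# Haar QMF pair: the high-pass filter is the low-pass filter reversed,
# with alternating signs.
s = np.sqrt(2.0)
lo = np.array([1 / s, 1 / s])
hi = np.array([1 / s, -1 / s])

a, d = dwt_level(np.array([4.0, 4.0, 2.0, 2.0]), lo, hi)
# a holds the smoothed half-length signal; d is all zeros here because
# adjacent samples are pairwise equal (no detail at this scale).
```

Note how each level halves the working signal, which is what makes the full transform cheap.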
Wavelet theory shows that the original signal can be reconstructed from the coarsest scale coefficients and the series of wavelet coefficients. The total number of coefficients (scale + wavelet) equals the number of samples in the signal.

1.2 Wavelet Compression

How do we use these transform coefficients to perform compression? The distribution of values for the wavelet coefficients is usually centered around 0, with very few large coefficients. This means that almost all the information is concentrated in a small fraction of the coefficients and can be efficiently compressed. This is done by quantizing the values based on the histogram and encoding the result in an efficient way, e.g. Huffman Encoding. For this homework we will use a simpler method: instead of quantizing, we will discard all but the M largest coefficients. This provides a compression ratio of roughly 2M/N (the factor of 2 is for storing both the coefficient value and its index).

1.3 Where to Get More Information

A very good book on wavelets is "A Wavelet Tour of Signal Processing, 2nd edition" by Stéphane Mallat. I also found several reasonable tutorials on the web, such as:

• cas.ensmp.fr/~chaplais/Wavetour_presentation/Wavetour_presentation_US.html
• http://www.public.iastate.edu/~rpolikar/WAVELETS/WTtutorial.html
• http://perso.wanadoo.fr/polyvalens/clemens/wavelets/wavelets.html

2 Algorithm Description

The compression algorithm has two parts. The first is a wavelet transform that uses the Fast Wavelet Transform taken from the Wavelab library developed at the Statistics Department here at Stanford. This is a rich library of Matlab functions dealing with wavelets and wavelet analysis, and you are encouraged to check out their web page at http://www-stat.stanford.edu/~wavelab/. After calculating the transform coefficients, a sort is applied, and all but the largest n coefficients are discarded.
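The keep-the-M-largest scheme can be sketched as follows (a Python illustration, not part of the assignment code; the function names compress and decompress are our own):

```python
import numpy as np

def compress(coeffs, M):
    """Keep the M largest-magnitude coefficients as (index, value) pairs.
    Storing both index and value is the factor of 2 in the 2M/N ratio."""
    idx = np.argsort(np.abs(coeffs))[-M:]   # indices of the M largest
    return [(int(i), float(coeffs[i])) for i in idx]

def decompress(pairs, N):
    """Rebuild a length-N coefficient vector, zeroing all discarded entries."""
    out = np.zeros(N)
    for i, v in pairs:
        out[i] = v
    return out

c = np.array([0.1, -7.0, 0.3, 2.5, -0.2, 0.0, 4.0, -0.4])
restored = decompress(compress(c, 3), len(c))
# Only -7.0, 4.0 and 2.5 survive; the other five entries become zero,
# and the stored size is 2 * 3 = 6 numbers instead of 8.
```

Because the coefficient histogram is concentrated near zero, the restored vector is usually close to the original despite discarding most entries.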
The signal can be reconstructed by performing an Inverse Wavelet Transform using the n stored coefficients and zeroing out all other coefficients.

I will now describe in more detail the simplest wavelet transform, which is an orthogonal and periodic transform, implemented by the FWT_PO.m function. You can find this function and the entire Wavelab distribution in /afs/ir.stanford.edu/class/ee482c/wavelab/Orthogonal.

The pseudocode for the Fast Wavelet Transform algorithm appears below. The algorithm consists of a main loop that has two parts. The first part calculates the wavelet coefficients in the current scale (high-pass filter), and the second part calculates the scale coefficients by low-pass filtering, essentially shifting the scale down (towards less detail). This loop is repeated O(log n) times, once for each scale. Notice that the amount of computation in each iteration shrinks by a factor of two for each scale lowered. As a result the total computation cost is O(n), and the filter is applied to roughly 4n elements. Both the low-pass and high-pass filtering are
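The main loop described above might look like this in Python (a sketch of the structure only, using Haar filters; it assumes the input length is a power of two and is not Wavelab's actual FWT_PO implementation):

```python
import numpy as np

def fwt(signal, lo, hi):
    """Forward wavelet transform, periodic and orthogonal, in the spirit of
    Wavelab's FWT_PO: loop over scales, halving the working signal each time.
    Output layout: [coarsest scale coeff, wavelet coeffs coarse -> fine]."""
    x = np.asarray(signal, dtype=float)
    n = len(x)                      # assumed to be a power of two
    out = np.empty(n)
    while n > 1:
        half = n // 2
        approx = np.empty(half)
        for k in range(half):
            # First part: high-pass filter -> wavelet coefficients, stored
            # in the second half of the current output region.
            out[half + k] = sum(hi[i] * x[(2 * k + i) % n] for i in range(len(hi)))
            # Second part: low-pass filter -> scale coefficients, which
            # become the (half-length) signal for the next iteration.
            approx[k] = sum(lo[i] * x[(2 * k + i) % n] for i in range(len(lo)))
        x, n = approx, half         # the work per iteration halves: O(n) total
    out[0] = x[0]                   # the single coarsest scale coefficient
    return out

# Haar QMF pair, as before.
s = np.sqrt(2.0)
lo = np.array([1 / s, 1 / s])
hi = np.array([1 / s, -1 / s])
```

Since the transform is orthogonal, it preserves the signal's energy, which is a handy sanity check when you debug your Imagine kernels against the Matlab reference.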

