UT Arlington EE 5359 - Project Report

EE 5359 Project Report (Spring 2008)
Comparative Study of Motion Estimation (ME) Algorithms
Khyati Mistry
1000552796

ABSTRACT

Motion estimation refers to the process of determining motion vectors that represent the transition/motion of objects in successive video frames. Motion estimation finds applications in two main areas: reduction of temporal redundancy in video coders and representation of the true motion of objects in real-time video applications.

This project focuses on a comparative study of the different motion estimation techniques and search algorithms that have been proposed for interframe coding of moving video sequences, taking into consideration the different kinds of motion of the moving objects in successive frames: translational, rotational, zooming, etc. This matters because reducing temporal redundancy conserves bandwidth, and reducing computational complexity lowers power consumption.

A broad classification of motion estimation algorithms is presented, and different time-domain and frequency-domain algorithms are studied and compared. Several of the block matching time-domain algorithms, such as full search, three step search, new three step search, four step search, and diamond search, are implemented, and their performance is compared on the basis of computational complexity and error function.

1. Introduction

Motion estimation is the process of determining motion vectors that describe the transformation from one 2D image to another, usually between adjacent frames in a video sequence. Fig. 1 shows the block diagram for motion estimation [1].

Fig. 1: Motion estimation block diagram [1]

The motion estimation module creates a model for the current frame by modifying the reference frames so that the model closely matches the current frame. This estimated current frame is then motion compensated, and the compensated residual image is encoded and transmitted. The objective is to estimate the motion vectors from two time-sequential frames of the video.
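As a concrete illustration of per-block motion estimation, the sketch below implements an exhaustive (full) search with a sum-of-absolute-differences (SAD) cost over a small search window. The function names, block size, and search range here are illustrative assumptions, not parameters taken from the report.

```python
import numpy as np

def sad(block, candidate):
    """Sum of absolute differences between two equal-sized blocks."""
    return int(np.abs(block.astype(np.int32) - candidate.astype(np.int32)).sum())

def full_search(cur, ref, top, left, bsize=16, srange=7):
    """Exhaustively test every +/-srange displacement of the current
    block (at row `top`, col `left`) against the reference frame and
    return the motion vector (dy, dx) with the minimum SAD cost."""
    block = cur[top:top + bsize, left:left + bsize]
    best_mv, best_cost = (0, 0), float("inf")
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = top + dy, left + dx
            # skip candidate positions that fall outside the reference frame
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue
            cost = sad(block, ref[y:y + bsize, x:x + bsize])
            if cost < best_cost:
                best_mv, best_cost = (dy, dx), cost
    return best_mv, best_cost
```

The faster search patterns the report compares (three step, four step, diamond) evaluate only a subset of these candidate positions, trading a possible loss of accuracy for far fewer SAD computations.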
The motion of an object in the 3-D object space is translated into two successive frames in the image space at time instants t1 and t2, as shown in Fig. 2. Translational and rotational motion of objects can be described in temporal frames using this model.

Fig. 2: Basic geometry [2]

where
t1, t2 = points on the time axis such that t2 > t1
(X, Y) = image-space coordinates of P in the scene at time t1
(X’, Y’) = image-space coordinates of P at time t2 > t1
(x, y, z) = object-space coordinates of a point P in the scene at time t1

The output of the motion-estimation algorithm comprises the motion vector for each block and the pixel value differences between the blocks in the current frame and the “matched” blocks in the reference frame.

1.1 Broad classification of motion estimation algorithms

Fig. 3: Motion estimation algorithms [10]

Frequency domain algorithms: these operate on the transform coefficients. The techniques used include phase correlation and matching in the wavelet and DCT domains [17]. However, because of the computational complexity involved, time-domain block matching algorithms are preferred.

Time domain algorithms: these comprise matching algorithms and gradient-based algorithms [1]-[16].
Block matching algorithms: match all or some of the pixels in the current block with a block in the search area of the reference frame, based on some cost function [1]-[15].

Feature based algorithms: match the meta information/data of the current block with that of a block in the reference frame [16].

1.1.1 Frequency domain motion estimation algorithms

In the frequency domain, the phase correlation algorithm provides accurate predictions but is based on the fast Fourier transform (FFT), which is incompatible with current DCT-based video coding standards and which has high computational complexity, since a large search window is necessary.

In implementation, the current frame is divided into 16x16 blocks, and a phase correlation calculation is performed for each block. In order to correctly estimate the cross correlation of the corresponding blocks in the respective frames, the blocks are extended to 32x32 in size, centered on the originally defined 16x16 blocks. If only 16x16 blocks are considered, their correlation might be very low for certain motions because of the small overlapping area, as shown in Fig. 4(a). Once the block size is extended to 32x32, the overlapping area increases, giving a better correlation estimate, as shown in Fig. 4(b).

The highest peak in the correlation map usually corresponds to the best match over the large area, but not necessarily the best match for the 16x16 object block. If there are several moving objects in the block with different displacements, several peaks can appear in the correlation map, as shown in Fig. 5, where there are two peaks. In this case several candidates are selected instead of just the single highest peak, and then the peak that best represents the displacement vector of the object block is chosen. Once the candidates are selected, they are examined one by one using image correlation.
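The phase correlation computation itself can be sketched as follows, assuming grayscale windows stored as NumPy arrays; the function name, the small epsilon guard, and the peak-unwrapping convention are illustrative choices, not details fixed by the report.

```python
import numpy as np

def phase_correlation(ref_win, cur_win):
    """Phase correlation surface of two equal-sized windows; the peak
    location gives the (cyclic) displacement of cur_win relative to ref_win."""
    F_ref = np.fft.fft2(ref_win)
    F_cur = np.fft.fft2(cur_win)
    cross = np.conj(F_ref) * F_cur
    cross /= np.abs(cross) + 1e-12          # keep only the phase information
    surface = np.real(np.fft.ifft2(cross))  # ideally an impulse at the shift
    peak = np.unravel_index(np.argmax(surface), surface.shape)
    # map wrapped peak indices to signed displacements
    h, w = surface.shape
    dy = peak[0] - h if peak[0] > h // 2 else peak[0]
    dx = peak[1] - w if peak[1] > w // 2 else peak[1]
    return (dy, dx), surface
```

Because the magnitude of the cross-power spectrum is normalized away, the result depends only on phase, which is what makes the peak sharp and relatively insensitive to illumination changes.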
For each candidate, the motion vector is already known, so the 16x16 object block can be placed accordingly in the 32x32 window of the previous frame to measure the extent of correlation. The candidate yielding the highest image correlation is chosen, and its displacement is taken as the motion vector for the object block.

Fig. 4: Correlation area using (a) 16x16 and (b) 32x32 blocks [21]

Fig. 5: Phase correlation between two blocks [21]

DCT-based motion estimation simplifies the conventional DCT-based video coder, which achieves spatial redundancy reduction through the DCT and temporal redundancy reduction through motion estimation and compensation. In the conventional DCT-based video coder, the feedback loop contains the following functional blocks: DCT, inverse DCT (IDCT), quantizer, inverse quantizer, and spatial-domain motion estimation and compensation (Fig. 6a). However, if motion estimation and compensation are performed entirely in the DCT domain, the IDCT can be removed from the
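The candidate-checking step described above can be sketched as follows. Normalized cross-correlation is used here as the "image correlation" measure, and candidate motion vectors are expressed as offsets of the block from the window's top-left corner; both are assumptions made for illustration, since the report does not pin down these details in the preview.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def pick_candidate(block, window, candidates):
    """Given candidate offsets (from the correlation-map peaks), place the
    16x16 block at each offset inside the 32x32 window of the previous
    frame and keep the candidate with the highest image correlation."""
    best_mv, best_score = None, -np.inf
    b = block.shape[0]
    for dy, dx in candidates:
        # skip candidates that would place the block outside the window
        if not (0 <= dy <= window.shape[0] - b and 0 <= dx <= window.shape[1] - b):
            continue
        score = ncc(block, window[dy:dy + b, dx:dx + b])
        if score > best_score:
            best_mv, best_score = (dy, dx), score
    return best_mv, best_score
```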

