UW-Madison ECE 734 - Introduction

1. Introduction

The main purpose of video segmentation is to enable content-based representation by extracting objects of interest from a series of consecutive video frames. This technique is used in the MPEG-4 standard and is also key to many robotic-vision applications: most vision-based autonomous vehicles acquire information about their surroundings by analyzing video. It is particularly important for high-level image understanding and scene interpretation, such as spotting and tracking special events in surveillance video. For instance, pedestrian and highway traffic can be regulated using density estimates obtained by segmenting people and vehicles. Object segmentation also makes it possible to detect speeding or suspicious vehicles, road obstacles, and unusual activities; to monitor forbidden zones, parking lots, and elevators automatically; and to support user interfaces through gesture recognition and visual biometric extraction. We developed a novel algorithm for automatic and reliable segmentation of moving objects in video sequences. In this project we implement an efficient video object segmentation algorithm in Matlab and investigate a hardware implementation using systolic arrays and a pipelined architecture.

2. Algorithm

Our algorithm is based on inter-frame change detection. Change detection from the inter-frame difference is one of the most practical approaches because it detects objects automatically and can handle nonrigid motion. Its drawback is noise caused by decision errors, which destroys spatial edge information, especially in crucial image areas, and produces small false regions. To overcome this drawback, we apply morphological operations to remove small hole regions and smooth the object boundary.

2.1 Extraction of the Moving Edge (ME) Map

Our algorithm starts with edge detection.
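The inter-frame change-detection idea above can be sketched in a few lines. This is an illustrative Python/NumPy version (the project itself uses Matlab), and the threshold value is an assumption for the toy example, not a figure from the report:

```python
import numpy as np

def change_mask(frame_prev, frame_curr, threshold=20):
    """Binary mask of pixels whose luminance changed between two frames.

    `threshold` is an illustrative value chosen for this toy example.
    """
    diff = np.abs(frame_curr.astype(np.int32) - frame_prev.astype(np.int32))
    return diff > threshold

# Toy example: a bright 2x2 "object" moves one pixel to the right.
prev = np.zeros((6, 6), dtype=np.uint8)
curr = np.zeros((6, 6), dtype=np.uint8)
prev[2:4, 1:3] = 200
curr[2:4, 2:4] = 200
mask = change_mask(prev, curr)
```

Only the uncovered and newly covered columns of the object change, which is exactly the noise-prone, edge-like response that the rest of Section 2 cleans up.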
Although edge information plays a key role in extracting the physical changes of the corresponding surfaces in a real scene, a simple difference of edge maps for extracting the shape of moving objects in a video sequence suffers from a great deal of noise, even with a stationary background. This is because the random noise created in one frame differs from the noise created in the successive frame. The difference of edges is defined as

    ∇(G * I_n) − ∇(G * I_{n−1})    (1)

where I_n and I_{n−1} are the current and previous frames, respectively, and ∇(G * I) is the edge map obtained by the Canny edge detector [2], which performs a gradient operation ∇ on the Gaussian-convolved image G * I. Extracting edges from the difference image of successive frames instead yields a noise-robust difference edge map DE_n, because the Gaussian convolution included in the Canny operator suppresses the noise in the luminance difference:

    DE_n = DE(I_n, I_{n−1}) = ∇(G * (I_n − I_{n−1}))    (2)

After calculating the difference edge map with the Canny detector, we extract the moving edge map ME_n of the current frame I_n from the difference edge map DE_n of the difference I_n − I_{n−1}, the current frame's edge map E_n = ∇(G * I_n), and the background edge map E_b. For a still camera, E_b contains the absolute background edges, which can be extracted from the first frame or by counting the number of edge occurrences at each pixel over the first several frames. We define the current edge map E_n = {e_1, …, e_k} as the set of all edge points detected by the Canny detector, and the moving edge map ME_n = {m_1, …, m_l} as a set of l moving edge points, where l ≤ k and ME_n ⊆ E_n. ME_n is generated by selecting edge pixels of E_n within a small distance T_change of DE_n:

    ME_n^change = { e ∈ E_n | min_{x ∈ DE_n} ‖e − x‖ ≤ T_change }    (3)

Some points of ME_n may be scattered noise, which needs to be removed.
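The selection rule of Eq. (3) can be sketched as follows. This is a hedged illustration that takes the edge sets as given coordinate lists (the Canny step is omitted) and uses a brute-force nearest-neighbour search; a distance transform would be the practical choice at real frame sizes:

```python
import numpy as np

def select_moving_edges(edge_pts, diff_edge_pts, t_change=2.0):
    """Eq. (3): keep points of E_n within distance T_change of DE_n.

    edge_pts, diff_edge_pts: sequences of (row, col) coordinates.
    Brute-force pairwise distances, fine for small illustrative sets.
    """
    edge_pts = np.asarray(edge_pts, dtype=float)
    diff_edge_pts = np.asarray(diff_edge_pts, dtype=float)
    # Distance from every e in E_n to every x in DE_n, via broadcasting.
    d = np.linalg.norm(edge_pts[:, None, :] - diff_edge_pts[None, :, :], axis=2)
    keep = d.min(axis=1) <= t_change
    return edge_pts[keep]

E_n = [(0, 0), (5, 5), (9, 9)]       # current-frame edge points
DE_n = [(0, 1), (5, 6)]              # difference edge points
ME_change = select_moving_edges(E_n, DE_n, t_change=2.0)
```

The point (9, 9) is dropped because its nearest difference-edge point is farther than T_change.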
In addition, the previous frame's moving edges can be referenced to detect temporarily still moving edges:

    ME_n^still = { e ∈ E_n | e ∉ E_b, min_{x ∈ ME_{n−1}} ‖e − x‖ ≤ T_still }    (4)

The final moving edge map for the current frame I_n is obtained by combining the two maps:

    ME_n = ME_n^change ∪ ME_n^still    (5)

2.2 Extraction of the Object

Once a moving edge map ME_n, as shown in Figure 1, has been detected from DE_n, the object is ready to be extracted. The horizontal object candidates are declared to be the region between the first and last edge in each row; the vertical object candidates are obtained the same way on columns. After finding both horizontal and vertical object candidates, we combine the two candidate maps, as shown in Figure 2, and then apply morphological operations to remove small false regions and smooth the object boundary. Here we use the erosion and dilation operations, as shown in Figure 3.

The following sections describe the arithmetic building blocks used in this project.

Figure 1. Edge map of the current frame
Figure 2. Combination of horizontal and vertical candidates
Figure 3. After applying morphology
Figure 4. Block diagram of the segmentation
Figure 5. Our result

2.3 Gradient Operator

The gradient of an image f(x, y) at location (x, y) is the vector

    ∇f = [∂f/∂x, ∂f/∂y]^T = [f_x, f_y]^T    (6)

It is well known from vector analysis that the gradient vector points in the direction of the maximum rate of change of f at (x, y). In edge detection, an important quantity is the magnitude of this vector, generally referred to simply as the gradient and denoted ∇f:

    ∇f = mag(∇f) = [f_x² + f_y²]^{1/2}    (7)

This quantity equals the maximum rate of increase of f(x, y) per unit distance in the direction of ∇f. Common practice is to approximate the gradient with absolute values:

    ∇f ≈ |f_x| + |f_y|    (8)

which is much simpler to implement, particularly with dedicated hardware. Note from Eqs. (6) and (7) that computing the gradient of an image requires the partial derivatives ∂f/∂x and ∂f/∂y at every pixel location.
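The row/column candidate construction of Section 2.2 can be sketched as below. This is an illustrative NumPy version; combining the two candidate maps with a logical AND is our reading of "combine", not a detail stated in the report:

```python
import numpy as np

def span_fill(mask, axis):
    """Fill between the first and last edge along the given axis.

    axis=1 fills each row (horizontal candidates),
    axis=0 fills each column (vertical candidates).
    """
    filled = np.zeros_like(mask, dtype=bool)
    m = mask if axis == 1 else mask.T
    out = filled if axis == 1 else filled.T   # transpose is a view
    for i, line in enumerate(m):
        idx = np.flatnonzero(line)
        if idx.size:
            out[i, idx[0]:idx[-1] + 1] = True
    return filled

# Toy moving-edge map: the outline of a 3x3 box.
edges = np.zeros((5, 5), dtype=bool)
edges[1, 1:4] = True
edges[3, 1:4] = True
edges[1:4, 1] = True
edges[1:4, 3] = True

horiz = span_fill(edges, axis=1)   # horizontal object candidates
vert = span_fill(edges, axis=0)    # vertical object candidates
obj = horiz & vert                 # combined candidate region
```

For this closed outline the combined map recovers the full solid box, including the interior pixel that no edge touches.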
There are several ways to implement the first-order derivatives in digital form [3]. Among these methods, we chose the Sobel operator for our project. The Sobel operator masks are

    f_x = (z_7 + 2z_8 + z_9) − (z_1 + 2z_2 + z_3)    (9)

and

    f_y = (z_3 + 2z_6 + z_9) − (z_1 + 2z_4 + z_7)    (10)

where the z_i refer to the pixels of a 3×3 window:

    z_1 z_2 z_3
    z_4 z_5 z_6
    z_7 z_8 z_9

These differences correspond to the two 3×3 convolution masks

    -1 -2 -1        -1  0  1
     0  0  0        -2  0  2
     1  2  1        -1  0  1

2.4 Morphology

In our project, we use two
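Eqs. (9) and (10) can be checked with a small sketch. This illustrative Python version applies the two window differences directly to one 3×3 window and combines them with the |f_x| + |f_y| approximation of Eq. (8); the project itself targets a Matlab and hardware implementation:

```python
import numpy as np

def sobel_window(z):
    """Apply Eqs. (9) and (10) to one 3x3 window z (z[0,0]=z1 ... z[2,2]=z9)."""
    fx = (z[2, 0] + 2 * z[2, 1] + z[2, 2]) - (z[0, 0] + 2 * z[0, 1] + z[0, 2])
    fy = (z[0, 2] + 2 * z[1, 2] + z[2, 2]) - (z[0, 0] + 2 * z[1, 0] + z[2, 0])
    return fx, fy

def grad_approx(z):
    """Eq. (8): gradient magnitude approximated by |f_x| + |f_y|."""
    fx, fy = sobel_window(z)
    return abs(fx) + abs(fy)

# Step edge: dark left columns, bright right column.
w = np.array([[0, 0, 100],
              [0, 0, 100],
              [0, 0, 100]], dtype=float)
fx, fy = sobel_window(w)
```

On this window f_x vanishes (the rows are identical top to bottom) while f_y responds strongly, so the |f_x| + |f_y| approximation reduces to |f_y| here; this additions-only form is what makes Eq. (8) attractive for dedicated hardware.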

