Person Tracking for Parking Space Vacancy Prediction

Mooneer Salem
Department of Computer Science and Engineering
University of California, San Diego
La Jolla, CA

Watson
Department of Computer Science and Engineering
University of California, San Diego
La Jolla, CA

Abstract

Finding a parking space in a busy parking lot is complicated. The traditional approach involves driving through the entire lot, examining each space to determine its usability. This approach has shortcomings that make it unusable in large or extremely busy lots. In this paper, we examine an approach to automating the detection of parking spaces that are about to become vacant, in other words, the detection of cars that are about to leave the space they occupy. We examine the accuracy of tracking moving people and vehicles and how this information could be used to display vacancy information at the parking lot entrance.

1 Introduction

1.1 Motivation

The task of finding a parking space in an urban environment is complicated. Compared to rural or suburban environments, the number of spaces and their locations are limited. In a busy location, people waiting to park may wait several minutes before finding any space at all, and much longer if they want a closer one. Finding a space involves driving around the entire parking lot multiple times, examining each space to determine its vacancy. This approach is not optimal, for several reasons:

Use of non-renewable resources  Examining every space in a parking lot by car burns valuable gasoline and diesel fuel that could be used for other purposes, such as delivering goods across the country. Unnecessary burning of fossil fuels also increases the operating cost of the vehicle.

Inefficient use of time  Every minute spent looking for a space is a minute that could be spent running other errands or spending time with family.

Traffic considerations  Cars needlessly circling a parking lot increase the traffic inside the lot. This reduces the rate at which cars leave the lot, compounding the problem.

From the above, it is clear that a better approach is needed. We propose a computer vision solution to the problem of parking space vacancy detection, in which video taken from a camera (or other source) is analyzed and appropriate actions are taken depending on the movement of objects within the camera's field of vision. The process could be extended to broadcast a listing of parking spaces that are about to become vacant to a conspicuous location in the parking lot.

1.2 Previous Work

At least one project has already attempted to tackle this problem. UbiPark [2], designed by students at UCSD, detects vacant versus occupied parking spaces, relying only on the presence or absence of a car in a particular space.

Zhao and Nevatia tackled the human tracking problem in [4]. In particular, they worked on tracking multiple humans at once, something this system would have to implement to be useful in busy situations. Other approaches to human motion capture and analysis are described in [3].

Finally, the general problem of object detection is discussed in the paper by Bergboer, Postma and van den Herik [1].
Their approach relies on still images, but in theory such an approach could be extended to video as well.

2 The Person/Car Tracking Algorithm

The algorithm is split into several stages.

2.1 Image Differencing

In order for the other algorithms used by the system to function, we need to determine which portions of the frame have changed relative to a certain frame or combination of frames. In our system, the components of each pixel of the current frame are subtracted from the components of the corresponding pixel in the previous frame, producing the amount of difference in each component. Then, to measure the magnitude of change across all components, the Euclidean distance

$\sqrt{(x_1 - y_1)^2 + (x_2 - y_2)^2 + (x_3 - y_3)^2}$

(where $x$ corresponds to the current frame and $y$ to the previous frame) is used. This distance is written into all three components of a result image, producing a grayscale RGB image.

This leaves the question of what each component should contain. Typical graphics applications use RGB, which stores the magnitude of red, green, and blue in each pixel. RGB also implicitly encodes information other than the color of each pixel, such as luminance. In contrast, L*a*b* separates color data from luminance data, allowing an application designer to determine the amount of movement in an image simply from the degree of the color change. As a result, the L*a*b* color space is used in this project.

Once the grayscale image is produced by the differencing process, extraneous data (very small changes of a pixel or a few pixels in area) is removed by smoothing the image with a Gaussian blur. The kernel used for this purpose is a 7x7 matrix. This effectively eliminates very small changes from the image while leaving larger areas usable by later stages.

2.2 Blob Detection

Blob detection (also known as connected component labeling) is the next step in the pipeline. It takes the grayscale image produced in the previous section and attempts to find the bright areas. We use an OpenCV-based library [5] for this purpose, called cvBlobsLib. Once blobs are detected, the library determines their size and shape and encodes this information in a structure for later use.

To account for intermittent differences that produce many smaller blobs, we merge blobs in the same frame that are close together into one larger blob. Merging requires a pass over all blobs in the frame to determine which are close together, a calculation that must be performed on the order of $n^2$ times, where $n$ is the number of blobs. Because of this, we replaced the Euclidean distance calculation with a box calculation: a box is drawn around the center of each blob, and if any of these boxes overlap, the structures of the two blobs are merged and both are treated as a single blob. This process is repeated until every blob has been checked against every other blob.

2.3 Object Typing

In order to take appropriate action, it is necessary to know what type of object a particular blob represents in an image. From our own perception of the world around us, we observed that cars tend to have a low height-to-width ratio, while humans have a larger height-to-width ratio. A 1:1 height/width ratio is a square. To make this decision as
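As a rough illustration of Sections 2.1 through 2.3, the sketch below re-creates the differencing, blob-merging, and aspect-ratio typing steps in Python with OpenCV. It is a minimal sketch under stated assumptions, not the authors' implementation: cv2.connectedComponentsWithStats stands in for cvBlobsLib, and the constants CHANGE_THRESH, BOX_PAD, and PERSON_RATIO are illustrative values, not values taken from the paper.

    # Sketch of the tracking pipeline described above (Python / OpenCV).
    import cv2
    import numpy as np

    CHANGE_THRESH = 30   # assumed: minimum per-pixel L*a*b* distance counted as a change
    BOX_PAD = 10         # assumed: half-width of the box drawn around each blob center
    PERSON_RATIO = 2.0   # assumed: height/width ratio above which a blob is typed as a person

    def difference_image(prev_bgr, curr_bgr):
        """Per-pixel Euclidean distance between frames in L*a*b*, smoothed with a 7x7 Gaussian."""
        prev_lab = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        curr_lab = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        dist = np.sqrt(((curr_lab - prev_lab) ** 2).sum(axis=2))   # sqrt((x1-y1)^2 + ...)
        gray = np.clip(dist, 0, 255).astype(np.uint8)
        return cv2.GaussianBlur(gray, (7, 7), 0)                   # remove pixel-scale noise

    def detect_blobs(gray):
        """Threshold the difference image and label connected components (blobs)."""
        _, mask = cv2.threshold(gray, CHANGE_THRESH, 255, cv2.THRESH_BINARY)
        n, _, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
        # stats rows: x, y, width, height, area; row 0 is the background label.
        return [dict(x=s[0], y=s[1], w=s[2], h=s[3], cx=c[0], cy=c[1])
                for s, c in zip(stats[1:], centroids[1:])]

    def merge_close_blobs(blobs):
        """Merge blobs whose padded center boxes overlap (pairwise O(n^2) check per pass)."""
        merged = True
        while merged:
            merged = False
            for i in range(len(blobs)):
                for j in range(i + 1, len(blobs)):
                    a, b = blobs[i], blobs[j]
                    if (abs(a["cx"] - b["cx"]) <= 2 * BOX_PAD and
                            abs(a["cy"] - b["cy"]) <= 2 * BOX_PAD):
                        # Replace the pair with the union of their bounding boxes.
                        x0 = min(a["x"], b["x"]); y0 = min(a["y"], b["y"])
                        x1 = max(a["x"] + a["w"], b["x"] + b["w"])
                        y1 = max(a["y"] + a["h"], b["y"] + b["h"])
                        blobs[i] = dict(x=x0, y=y0, w=x1 - x0, h=y1 - y0,
                                        cx=(x0 + x1) / 2, cy=(y0 + y1) / 2)
                        del blobs[j]
                        merged = True
                        break
                if merged:
                    break
        return blobs

    def classify_blob(blob):
        """Type a blob by its height/width ratio: tall and narrow -> person, wide and low -> car."""
        ratio = blob["h"] / max(blob["w"], 1)
        return "person" if ratio >= PERSON_RATIO else "car"

In a full system, a frame loop would call difference_image on consecutive video frames, pass the result through detect_blobs and merge_close_blobs, and then type each merged blob with classify_blob before deciding whether a tracked person is approaching a parked car.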

