UCSD CSE 190 - Behavior Recognition
CSE 190-A: Behavior Recognition using Cuboids Within a Learning Framework

Anton Escobedo
Department of Computer Science and Engineering
University of California, San Diego
San Diego, CA [email protected]

Reinaldo Sugiharto
Department of Computer Science and Engineering
University of California, San Diego
San Diego, CA [email protected]

Abstract

Behavior recognition is a challenge central to computer vision. With the increase in video data seen over the past years, methods to automatically interpret this data and recognize behavior are of great importance. The Smart Vivarium provides an ideal environment for the development of automated behavior recognition: because it avoids some of the issues commonly seen in unconstrained outdoor settings, it offers a good starting point from which to develop behavior recognition techniques. We propose an algorithm based on [3], modified to incorporate relative positioning for each cuboid in a manner similar to that presented in [1]. We also propose including displacement as an indicator of behavior. Finally, we will use SNoW [2] as a classifier to label each instance of behavior.

1 Introduction

This project takes place within the Smart Vivarium, an automated animal-welfare monitoring system. The main purpose of the project is to develop a framework that can accurately recognize the behavior taking place within a clip of video. Our proposed method is largely based on Dollar et al.'s [3] in that it also uses a sparse, feature-based representation to classify behavior. Cuboids are extracted from video based on a response function that consists of separable linear filters. These cuboids are then clustered by type and stored as prototypes, which are in turn used to create histogram-based behavior descriptors. Unfortunately, this approach discards all positional information, relying only on the presence of the given prototypes when recognizing behavior. We adopt an approach similar to that of Agarwal et al. [1]
in which relations over detected parts are defined in terms of the distance and direction between them. In addition, we consider translation an important part of behavior and include it in our final feature vector. We use this information, along with the types of features present and their relations, to create a final behavior descriptor. Once the behavior descriptors are obtained, we train a classifier using SNoW. We expect to see only a few cuboid types present in each behavior, thus providing a sparse representation. We take advantage of SNoW's ability to assign greater importance to certain features, adjusting its parameters to best represent behavior in terms of a combination of translation, the cuboids present, and the relative positioning of cuboids.

2 Qualifications

Anton Escobedo:
- Completed cs152, Introduction to Computer Vision, in Spring 2006.
- Worked on the GroZi database during a summer research program.
- Currently enrolled in cs252c.
- Completed several project-based courses, including cs111, cs112, and cs131a.

Reinaldo Sugiharto:
- Currently enrolled in cs167.
- Completed several project-based courses, including cs111 and cs131a.
- Currently performing a literature review on behavior recognition and MATLAB.

3 Milestones

Nov 10 - January 15: Literature review on behavior recognition and on multi-class classifiers such as SNoW and Randomized Trees [4].
January 16 - February 6: Implement cuboids based on [3].
February 7 - February 14: Add relational information to cuboids; decide how to incorporate displacement information into the behavior descriptors.
February 15 - February 22: Add displacement information; make the final decision on the learning framework to be used, most likely SNoW.
February 23 - February 28: Use the new behavior descriptors in a learning framework; begin the final research paper.
March 1 - March 19: Further testing; finish the research paper.
If time allows, test different data sets as well.

4 Division of Labor

For the most part, we will follow a horizontal division of labor, with both members working on the same areas of the project together. Some tasks will be divided as follows:

Reinaldo:
- Implementing the learning framework
- Testing
- Maintaining the blog

Anton:
- Implementing cuboids
- Writing the final report

5 Questions to be Answered During the Project

1. How important is the positional relation of different features in behavior recognition?
2. How is displacement best represented when using a feature-based representation?
3. Is it possible to recognize more types of behavior, without a great loss in robustness, when including displacement as part of a behavior's descriptor?
4. Which learning framework is most suitable for behavior recognition given the type of descriptor used in this project?
5. Can this algorithm be easily extended to other types of behavior, and how well does it perform?

6 Existing Software

We will use Piotr Dollar's MATLAB toolbox for the implementation of cuboids and for video manipulation. For the learning framework, we will use existing implementations where possible; one such framework is SNoW. Otherwise, we will write most of the code in MATLAB.

7 Existing Data

We will use data obtained from the Smart Vivarium to train and test our system. If time allows, we will also use the facial expression dataset and Schuldt and Laptev's human action dataset from [5].

7.1 Acknowledgements

We would like to acknowledge Dr. Serge Belongie and the SO3 group for valuable input.

References

[1] S. Agarwal and A. Awan. Learning to Detect Objects in Images via a Sparse, Part-Based Representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(11):1475-1490, 2004.
[2] A.J. Carlson, C.M. Cumby, J.L. Rosen, and D. Roth. SNoW User Guide. University of Illinois, http://l2r.cs.uiuc.edu/cogcomp, 1999.
[3] P. Dollar, V. Rabaud, G. Cottrell, and S. Belongie. Behavior recognition via sparse spatio-temporal features.
In 2nd Joint IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, pages 65-72, 2005.
[4] V. Lepetit, P. Lagger, and P. Fua. Randomized Trees for Real-Time Keypoint Recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2005.
[5] C. Schuldt, I. Laptev, and B. Caputo. Recognizing human actions: a local SVM approach. In Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), volume 3.
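Appendix: Implementation sketch

To make the proposed pipeline concrete, the sketch below implements a response function in the style of [3] (per-frame spatial Gaussian smoothing followed by a quadrature pair of 1-D temporal Gabor filters, with cuboids cut around local maxima of the response) and a toy final descriptor that concatenates a cuboid-prototype histogram with a net-displacement term, the translation cue proposed above. It is written in Python with NumPy/SciPy rather than the project's MATLAB purely for illustration; all parameter values, the omega = 4/tau coupling, and the particular displacement heuristic are illustrative assumptions, not values taken from [3].

```python
import numpy as np
from scipy.ndimage import gaussian_filter, convolve1d, maximum_filter

def response_function(video, sigma=2.0, tau=2.5):
    """Response R = (I*g*h_ev)^2 + (I*g*h_od)^2 over a (T, H, W) clip.

    sigma is the spatial Gaussian scale, tau the temporal Gabor scale;
    omega = 4/tau is an assumed coupling. Values are illustrative.
    """
    omega = 4.0 / tau
    # Spatial smoothing only: no smoothing along the time axis (axis 0).
    smoothed = gaussian_filter(video.astype(float), sigma=(0, sigma, sigma))
    # Quadrature (even/odd) pair of 1-D Gabor filters applied along time.
    t = np.arange(-int(2 * tau), int(2 * tau) + 1)
    h_ev = -np.cos(2 * np.pi * t * omega) * np.exp(-t**2 / tau**2)
    h_od = -np.sin(2 * np.pi * t * omega) * np.exp(-t**2 / tau**2)
    r_ev = convolve1d(smoothed, h_ev, axis=0)
    r_od = convolve1d(smoothed, h_od, axis=0)
    return r_ev**2 + r_od**2

def detect_interest_points(R, size=5, threshold=0.0):
    """(t, y, x) locations of local maxima of R above a threshold;
    cuboids would be extracted from the video around these points."""
    peaks = (R == maximum_filter(R, size=size)) & (R > threshold)
    return np.argwhere(peaks)

def behavior_descriptor(labels, positions, n_prototypes):
    """Toy behavior descriptor: histogram of cuboid prototype labels
    concatenated with the net (y, x) displacement between the earliest
    and latest cuboid, standing in for the proposed translation cue."""
    hist = np.bincount(labels, minlength=n_prototypes).astype(float)
    order = np.argsort(positions[:, 0])          # sort cuboids by time
    disp = positions[order[-1], 1:] - positions[order[0], 1:]
    return np.concatenate([hist, disp.astype(float)])
```

Relative-position relations between cuboid pairs, as in [1], would be appended to the same vector before training the SNoW classifier.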

