MSU CSE 802 - Recognizing Faces

Dirk Colbry (colbrydi@cse.msu.edu)

Outline
- Introduction and Motivation
- Defining a feature vector
- Principal Component Analysis
- Linear Discriminant Analysis

Introduction and Motivation
Faces show intra-subject variations in pose, illumination, expression, accessories, color, occlusion, and brightness, while different persons may have very similar appearance.
[Image source: http://www.infotech.oulu.fi/Annual/2004]

Humans can recognize caricatures and cartoons. How can we learn salient facial features? This is the distinction between discriminative and descriptive approaches.

Face Modeling Challenges
A 2D image depends on more nuisance factors than a 3D scan:
$I_{2D} = f(P, L, O_G, O_L, E, \mathrm{faceShape})$
$I_{3D} = f(O_L, E, \mathrm{faceShape})$
where $L$ = lighting, $P$ = pose, $E$ = expression, $O_G$ = global occlusions (hands, walls), and $O_L$ = local occlusions (hair, makeup).
[Image sources: http://www.peterme.com/archives/2004_05.html, www.3g.co.uk/PR/March2005/1109.htm, www.customs.gov.au/site/page.cfm?u=4419]

Using the Cognitec engine, pictures of a dead subject were compared with photographs of Dillinger. The best matching score, with similar pose, was 0.29. A reference group (a subset of FERET) was used to construct the genuine and impostor matching-score distributions.
[Figure: dead-subject and living-subject photograph pairs, with matching scores of 0.29, 0.29, 0.38, and 0.22]

"Left-right" and "up-down" show identification rates for the non-frontal images; "left-right morphed" and "up-down morphed" show identification rates for the morphed non-frontal images. Performance is reported on a database of 87 individuals.

Algorithms
- Pose dependency: pose-dependent vs. pose-invariant
- Face representation: object-centered (models) vs. viewer-centered (images)
- Matching features:
  - Appearance-based (holistic): PCA, LDA; hybrid: LFA
  - Feature-based (analytic): Elastic Bunch Graph Matching, Active Appearance Models, Morphable Model

Analytic Approach
Build a feature vector of distances between facial anchor points, e.g. $x = (d_1, d_2, d_3, \ldots, d_{12})$. It can be difficult to obtain reliable and repeatable anchor points.

Holistic Approach
Treat the pixels themselves as the features. An $r \times c$ image with pixel values $p(i,j)$ is stacked row by row into one long vector of size $d = r \cdot c$:
$x = (p(1,1),\, p(1,2),\, p(1,3),\, \ldots,\, p(1,c),\, p(2,1),\, p(2,2),\, \ldots,\, p(r,c))^T$, a $d \times 1$ vector.

Dealing With Large Dimensional Spaces (Curse of Dimensionality)
Every point in the d-dimensional space is a picture, and the majority of those points are just noise. The goal is to identify the region of this space whose points are faces.

Principal Component Analysis (a.k.a. Karhunen-Loeve Projection)
[Figure: principal component axes V1 and V2 for a 2D example data set]
Given n training images $x_1, x_2, x_3, \ldots, x_n$, where each $x_i$ is a d-dimensional column vector:
- Define the image mean as $\mu = \frac{1}{n} \sum_{i=1}^{n} x_i$.
- Shift all images to the mean, $u_i = x_i - \mu$, and collect them as $U = [u_1\, u_2\, u_3 \cdots u_n]$ ($d \times n$).
- Define the $d \times d$ covariance matrix on the set X as $C_x = U U^T$.

Computation of Principal Component Axes
The goal is to calculate eigenvectors $V_i$ such that $U U^T V_i = \lambda_i V_i$. The scatter matrix can be quite large: for an image with 240 rows and 320 columns, $d = 76{,}800$, so the scatter matrix is $76{,}800 \times 76{,}800$. Solving for such a large scatter matrix is impractical, so we use a linear algebra trick on a much smaller matrix: $U^T U$, which is $n \times n$ and much smaller than the scatter matrix as long as $n \ll d$.

Efficient Eigenvector Calculation
Calculate the eigenvectors $W_i$ of the new matrix: $U^T U W_i = \lambda_i W_i$. Multiplying both sides by U shows that $U U^T (U W_i) = \lambda_i (U W_i)$, so each $U W_i$ is an eigenvector of the scatter matrix. Let $V_i$ be the corresponding unit vector:
$V_i = \frac{U W_i}{\| U W_i \|}, \quad i = 1, 2, 3, \ldots, n$
Solving for the $W_i$ therefore gives us the $V_i$ we want.

Feature Space Reduction
Order the eigenvectors by the magnitude of their eigenvalues, $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$, and find the smallest b such that
$\frac{\sum_{i=1}^{b} \lambda_i}{\sum_{j=1}^{n} \lambda_j} \geq \theta$
Ideally $b \ll d$; the threshold $\theta$ sets how much of the total variance is retained, so $1 - \theta$ is the ratio of information lost. The b vectors define the new PCA subspace: $V_1, V_2, V_3, \ldots, V_b$.

Visualizing the Principal Component Axes as an Image
[Figure: the mean face and the first 15 principal axes V1 through V15 rendered as images (b = 15); from http://vismod.media.mit.edu/vismod/demos/facerec/basic.html]
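Below is a minimal NumPy sketch of the eigenface computation just described, using the small-matrix trick; the function name pca_fit and the threshold parameter theta are illustrative choices, not from the slides. Each column of X is assumed to be one flattened r x c face image, as in the holistic approach.

```python
import numpy as np

def pca_fit(X, theta=0.95):
    """Fit a PCA subspace to the d x n image matrix X (one image per column)."""
    d, n = X.shape
    mu = X.mean(axis=1, keepdims=True)       # image mean, d x 1
    U = X - mu                               # mean-shifted images, d x n

    # Eigen-decompose the small n x n matrix U^T U instead of the d x d
    # scatter matrix U U^T, which is never formed explicitly.
    lam, W = np.linalg.eigh(U.T @ U)         # eigenvalues come back ascending
    order = np.argsort(lam)[::-1]            # sort by descending eigenvalue
    lam, W = lam[order], W[:, order]
    keep = lam > 1e-10                       # drop numerically zero directions
    lam, W = lam[keep], W[:, keep]

    # Lift to the big problem: V_i = U W_i / ||U W_i|| is a unit
    # eigenvector of U U^T with the same eigenvalue.
    V = U @ W
    V /= np.linalg.norm(V, axis=0)

    # Smallest b whose eigenvalues capture a fraction theta of the variance.
    b = int(np.searchsorted(np.cumsum(lam) / lam.sum(), theta)) + 1
    return mu, V[:, :b], lam[:b]
```

For 240 x 320 images this solves an n x n eigenproblem rather than a 76,800 x 76,800 one.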
Converting Images to the New PCA Subspace
Collect the subspace vectors into a matrix $M = [V_1\, V_2\, V_3 \cdots V_b]$ ($d \times b$). Project an original image into the new PCA subspace as
$y_i = M^T u_i = M^T (x_i - \mu)$, or $Y = M^T U$ in matrix notation.
An approximate original image can then be reconstructed from its subspace vector:
$\hat{x}_i = M y_i + \mu \approx x_i$

Reconstructed Images
Given a new image vector $x_{new}$, convert it into the new subspace: $y_{new} = M^T (x_{new} - \mu)$. Remember that the y vector is much smaller than the x vector. Reconstructing gives $\hat{x}_{new} = M y_{new} + \mu \approx x_{new}$.
[Images from http://vismod.media.mit.edu/vismod/demos/facerec/basic.html]
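The projection and reconstruction steps map onto two short helpers. This continues the pca_fit sketch above; X_train and x_new are hypothetical arrays standing in for real face data.

```python
def pca_project(M, mu, x):
    """Project an image into the PCA subspace: y = M^T (x - mu)."""
    return M.T @ (x - mu)

def pca_reconstruct(M, mu, y):
    """Approximately reconstruct the image: x_hat = M y + mu."""
    return M @ y + mu

# Round-trip a new face image (X_train is d x n, x_new is d x 1, both hypothetical).
mu, M, lam = pca_fit(X_train, theta=0.95)
y_new = pca_project(M, mu, x_new)        # b x 1 -- much shorter than x_new
x_hat = pca_reconstruct(M, mu, y_new)    # d x 1 approximation of x_new
```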
Problems with PCA
[Figure: 2D example in which the principal axes V1 and V2 capture the overall variance of the data but are not the directions that best separate the classes]

Linear Discriminant Analysis
LDA uses the class information to choose the best subspace. The training examples have class labels, and the aim is a projection in which samples of the same class are close together and samples of different classes are far apart. Start by calculating the mean and scatter matrix of each class.

Defining LDA Space
Compute the mean and covariance of all points ($\mu$ and $C_x$; note that $\mu$ and x follow the PCA notation) and of each of the K classes ($\mu_i$ and $C_i$). Let $p_i$ be the a priori probability of each class; typically $p_i = 1/K$. The goal of LDA is to find a matrix W that projects the points X into a new, more useful subspace Z:
$Z = W^T X$
The transformation W is chosen to maximize the ratio of the between-class scatter to the within-class scatter:
$\max_W \frac{S_{between}}{S_{within}}$
where
$S_{within} = \sum_{i=1}^{K} p_i C_i \qquad S_{between} = \sum_{i=1}^{K} p_i (\mu_i - \mu)(\mu_i - \mu)^T$
$S_{between}$ can be thought of as the covariance of a data set whose members are the means of the classes.

Solving for W
The optimal projection W, which maximizes the ratio above, can be obtained by solving the generalized eigenvalue problem
$S_{between} W = \lambda S_{within} W$
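A sketch of this computation under the same conventions; lda_fit is an illustrative name, and scipy.linalg.eigh is used because it solves the generalized symmetric eigenproblem directly. Note that $S_{within}$ must be invertible, which for raw face images usually means reducing the dimension first, for example with the PCA projection above.

```python
import numpy as np
from scipy.linalg import eigh

def lda_fit(X, labels, m=None):
    """Solve S_between W = lambda * S_within W for the LDA projection.

    X is d x n with one sample per column; labels has length n. Returns
    the m most discriminative columns of W (at most K - 1 carry
    information when there are K classes).
    """
    labels = np.asarray(labels)
    d, n = X.shape
    classes = np.unique(labels)
    mu = X.mean(axis=1, keepdims=True)         # global mean, as in PCA

    S_within = np.zeros((d, d))
    S_between = np.zeros((d, d))
    for c in classes:
        Xc = X[:, labels == c]
        n_c = Xc.shape[1]
        p_c = n_c / n                          # empirical prior p_i
        mu_c = Xc.mean(axis=1, keepdims=True)  # class mean mu_i
        Uc = Xc - mu_c
        S_within += p_c * (Uc @ Uc.T) / n_c    # p_i * C_i
        S_between += p_c * (mu_c - mu) @ (mu_c - mu).T

    # Generalized symmetric eigenproblem; eigenvalues come back ascending,
    # so flip to put the most discriminative directions first.
    lam, W = eigh(S_between, S_within)
    W = W[:, ::-1]
    m = len(classes) - 1 if m is None else m
    return W[:, :m]

# Usage (hypothetical PCA-reduced data): Z = W^T X as on the slide.
# W = lda_fit(Y_train, train_labels)
# Z = W.T @ Y_train
```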
Review of Linear Transformations
- Principal Component Analysis (PCA) calculates a transformation from a d-dimensional space into a new d-dimensional space whose axes can be ordered from the most informative to the least informative. Smaller feature vectors can be obtained by using only the most informative of these axes; however, some information will be lost.
- Linear Discriminant Analysis (LDA) uses the class information to choose the best subspace: the one that maximizes the between-class variation while minimizing the within-class variation.

Note: both PCA and LDA are general mathematical methods and may be useful on any large-dimensional feature space, not just …