UCSD CSE 190 - Face Recognition

Face Recognition: Fisherfaces and Lighting
Biometrics, CSE 190a, Fall 06, Lecture 16
(Slides adapted from CS252A, Winter 2005, Computer Vision I.)

Results of Hand Recognition
• Alex: 64%
• Warren (Nearest Neighbor): 42%
• Warren (Bayesian): 32%
• Taurin (Nearest Neighbor): 60%
• Taurin (Statistical): 64%
• Tom (Nearest Neighbor): 86%
• Tom (Bayesian): 82%
• Vikram (CS2, Nearest Neighbor): 92%
• Vikram (CS1): 42%

Why is Face Recognition Hard?

Image as a Feature Vector
• Consider an n-pixel image to be a point in an n-dimensional space, x ∈ R^n.
• Each pixel value is a coordinate of x.
[Figure: an image plotted as a point with coordinates (x_1, x_2, x_3).]

Nearest Neighbor Classifier
• {R_j} is the set of training images.
• ID = argmin_j dist(R_j, I)
[Figure: training images R_1, R_2 and test image I as points in feature space.]

Comments
• Sometimes called "template matching."
• Variations on the distance function (e.g., L1, robust distances).
• Multiple templates per class: perhaps many training images per class.
• Expensive to compute k distances, especially when each image is big (N-dimensional).
• May not generalize well to unseen examples of a class.
• Some solutions:
– Bayesian classification
– Dimensionality reduction

Appearance-Based (View-Based) Methods
• Face space: a set of face images constructs a face space in R^n.
• Appearance-based methods analyze the distributions of individual faces in face space.

Eigenfaces: Linear Projection
• An n-pixel image x ∈ R^n can be projected to a low-dimensional feature space y ∈ R^m by y = Wx, where W is an m-by-n matrix.
• Recognition is performed using nearest neighbor in R^m.
• How do we choose a good W?

Eigenfaces: Principal Component Analysis (PCA)
• Some details: use the singular value decomposition, with the "trick" described in the text to compute the basis when the number of training images is much smaller than the number of pixels.

How Do You Construct the Eigenspace?
• Construct the data matrix W = [x_1 x_2 x_3 x_4 x_5] by stacking the vectorized images as columns, then apply the singular value decomposition (SVD).

Singular Value Decomposition
• Excellent reference: "Matrix Computations," Golub and Van Loan.
• Any m-by-n matrix A may be factored as A = UΣV^T, with dimensions [m × n] = [m × m][m × n][n × n].
• U: m-by-m orthogonal matrix; the columns of U are the eigenvectors of AA^T.
• V: n-by-n orthogonal matrix; its columns are the eigenvectors of A^T A.
• Σ: m-by-n diagonal matrix with non-negative entries σ_1, σ_2, …, σ_s, with s = min(m, n), called the singular values.
– The singular values are the square roots of the eigenvalues of both AA^T and A^T A.
– The SVD algorithm returns them sorted: σ_1 ≥ σ_2 ≥ … ≥ σ_s.

Eigenfaces
• Modeling:
1. Given a collection of n labeled training images,
2. compute the mean image and the covariance matrix;
3. compute the k eigenvectors of the covariance matrix corresponding to the k largest eigenvalues (note that these eigenvectors are themselves images);
4. project the training images to the k-dimensional eigenspace.
• Recognition:
1. Given a test image, project it to the eigenspace.
2. Classify it against the projected training images (see the sketch below).

Underlying Assumptions
• The background is not cluttered (or else we only look at the interior of the object).
• Lighting in the test image is similar to that in the training images.
• No occlusion.
• The size of the training image (window) is the same as the window in the test image.
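As a concrete illustration of the modeling and recognition steps above, here is a minimal sketch in Python with NumPy. It assumes images are already vectorized into the columns of a matrix; the function names (build_eigenspace, nearest_neighbor_id) and the use of an economy SVD in place of an explicit covariance matrix are illustrative choices, not prescribed by the lecture.

    import numpy as np

    def build_eigenspace(X, k):
        """X: n x N matrix, one vectorized training image per column.
        Returns the mean image and the top-k eigenfaces (n x k)."""
        mean = X.mean(axis=1, keepdims=True)
        A = X - mean                              # center the data
        # Economy SVD: columns of U are eigenvectors of A @ A.T,
        # i.e. of the covariance matrix up to scale (cf. SVD slide).
        U, S, Vt = np.linalg.svd(A, full_matrices=False)
        return mean, U[:, :k]

    def project(W, mean, x):
        """Project image columns into the k-dimensional eigenspace."""
        return W.T @ (x - mean)

    def nearest_neighbor_id(W, mean, X_train, labels, x_test):
        """Classify a test image by nearest neighbor in eigenspace."""
        Y_train = project(W, mean, X_train)           # k x N
        y = project(W, mean, x_test.reshape(-1, 1))   # k x 1
        d = np.linalg.norm(Y_train - y, axis=0)
        return labels[np.argmin(d)]

Taking U from the SVD of the centered data avoids ever forming the n-by-n covariance matrix, which is the same economy the "trick" on the PCA slide is after when the number of images is much smaller than the number of pixels.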
Face Detection Using "Distance to Face Space"
• Scan a window ω across the image, and classify the window as face/not-face as follows:
– Project the window onto the subspace and reconstruct it, as described earlier.
– Compute the distance between ω and its reconstruction.
– Local minima of the distance over all image locations that fall below some threshold are taken as the locations of faces.
– Repeat at different scales.
– Possibly normalize the window's intensity so that ||ω|| = 1.

Difficulties with PCA
• Projection may suppress important detail.
– The smallest-variance directions may not be unimportant.
• The method does not take the discriminative task into account.
– Typically, we wish to compute features that allow good discrimination, which is not the same as largest variance.

Illumination Variability
"The variations between the images of the same face due to illumination and viewing direction are almost always larger than image variations due to change in face identity."
-- Moses, Adini, Ullman, ECCV '94

Fisherfaces: Class-Specific Linear Projection
• P. Belhumeur, J. Hespanha, D. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection," PAMI, July 1997, pp. 711-720.
• An n-pixel image x ∈ R^n can be projected to a low-dimensional feature space y ∈ R^m by y = Wx, where W is an m-by-n matrix.
• Recognition is performed using nearest neighbor in R^m.
• How do we choose a good W?

PCA & Fisher's Linear Discriminant
• Between-class scatter: S_B = Σ_{i=1..c} |χ_i| (μ_i - μ)(μ_i - μ)^T
• Within-class scatter: S_W = Σ_{i=1..c} Σ_{x_k ∈ χ_i} (x_k - μ_i)(x_k - μ_i)^T
• Total scatter: S_T = Σ_{i=1..c} Σ_{x_k ∈ χ_i} (x_k - μ)(x_k - μ)^T = S_B + S_W
• where:
– c is the number of classes,
– μ_i is the mean of class χ_i,
– |χ_i| is the number of samples in χ_i.
[Figure: two classes χ_1, χ_2 with means μ_1, μ_2 and global mean μ.]

PCA & Fisher's Linear Discriminant
• PCA (Eigenfaces) maximizes the projected total scatter:
W_PCA = argmax_W |W^T S_T W|
• Fisher's Linear Discriminant maximizes the ratio of projected between-class to projected within-class scatter:
W_fld = argmax_W |W^T S_B W| / |W^T S_W W|
[Figure: classes χ_1, χ_2 projected onto the PCA and FLD directions.]

Computing the Fisher Projection Matrix
• The w_i are orthonormal.
• There are at most c-1 non-zero generalized eigenvalues, so m ≤ c-1.
• Can be computed with eig in Matlab.

Fisherfaces
• W_PCA = argmax_W |W^T S_T W|
• W_fld = argmax_W |W^T W_PCA^T S_B W_PCA W| / |W^T W_PCA^T S_W W_PCA W|
• Since S_W has rank N-c, project the training set onto the subspace spanned by the first N-c principal components of the training set.
• Apply FLD to the (N-c)-dimensional subspace, yielding a (c-1)-dimensional feature space (see the sketch below).
• Fisher's Linear Discriminant projects away the within-class variation (lighting, expressions) found in the training set.
• Fisher's Linear Discriminant preserves the separability of the classes.

PCA vs. FLD
[Figure: PCA and FLD projections compared.]

Experimental Results - 1
Variation in Facial Expression, Eyewear, and …
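To make the two-stage recipe above concrete, here is a minimal Python sketch of the Fisherface computation: PCA down to N-c dimensions so that the projected S_W is nonsingular, then the generalized eigenproblem for the final c-1 directions. scipy.linalg.eigh(Sb, Sw) stands in for the Matlab eig call mentioned earlier; the function names and the data layout (images as columns of X) are illustrative assumptions, not from the slides.

    import numpy as np
    from scipy.linalg import eigh

    def scatter_matrices(Y, labels):
        """Between-class (S_B) and within-class (S_W) scatter of the
        column-vector samples in Y, per the definitions above."""
        labels = np.asarray(labels)
        mu = Y.mean(axis=1, keepdims=True)
        d = Y.shape[0]
        Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
        for c in np.unique(labels):
            Yc = Y[:, labels == c]
            mu_c = Yc.mean(axis=1, keepdims=True)
            Sb += Yc.shape[1] * (mu_c - mu) @ (mu_c - mu).T  # |chi_i|(mu_i-mu)(mu_i-mu)^T
            Sw += (Yc - mu_c) @ (Yc - mu_c).T
        return Sb, Sw

    def fisherfaces(X, labels):
        """X: n x N matrix of vectorized faces. Returns W of shape n x (c-1)."""
        N = X.shape[1]
        c = len(np.unique(labels))
        A = X - X.mean(axis=1, keepdims=True)
        # PCA to N-c dimensions so the projected S_W has full rank.
        U, _, _ = np.linalg.svd(A, full_matrices=False)
        W_pca = U[:, :N - c]
        Sb, Sw = scatter_matrices(W_pca.T @ A, labels)
        # Generalized eigenproblem S_B w = lambda S_W w (Matlab: eig(Sb, Sw));
        # eigh returns eigenvalues in ascending order, so keep the last c-1.
        vals, vecs = eigh(Sb, Sw)
        W_fld = vecs[:, -(c - 1):][:, ::-1]
        return W_pca @ W_fld        # overall projection: y = W.T @ x

The resulting W has c-1 columns, matching the bound m ≤ c-1 above; projecting training and test images with y = W.T @ x and classifying by nearest neighbor in that space gives the recognizer the slides describe.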

