STEVENS CS 559 - Machine Learning Fundamentals and Applications 9th Set of Notes


CS 559: Machine Learning Fundamentals and Applications
9th Set of Notes

Instructor: Philippos Mordohai
Webpage: www.cs.stevens.edu/~mordohai
E-mail: [email protected]
Office: Lieb 215

Overview
• Eigenfaces (notes by Srinivasa Narasimhan, CMU)
• Fisher Linear Discriminant (DHS Chapter 3 and notes based on a course by Olga Veksler, Univ. of Western Ontario)
• Linear Discriminant Functions (DHS Chapter 5 and notes based on Olga Veksler's)
  – Introduction

Eigenfaces
• Face detection and person identification using PCA
• Real time
• Insensitivity to small changes
• Simplicity
• Limitations
  – Only frontal faces – one pose per classifier
  – No invariance to scaling, rotation or translation

Space of All Faces
• An image is a point in a high-dimensional space
  – An N x M image is a point in R^(NM)
  – We can define vectors in this space as we did in the 2D case

Key Idea
• Images in the possible set {x̂} ⊂ R^L are highly correlated
• So, compress them to a low-dimensional subspace that captures the key appearance characteristics of the visual DOFs
• EIGENFACES [Turk and Pentland]: use PCA
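The image-as-a-point view above can be sketched in a few lines of NumPy. This is a minimal illustration only: the array sizes and random "images" are stand-ins, not real face data.

```python
import numpy as np

# Hypothetical mini-dataset: 5 grayscale "images" of size 4 x 3,
# standing in for real face images (illustration only).
rng = np.random.default_rng(0)
images = rng.random((5, 4, 3))

# Each N x M image becomes a single point in R^(N*M): flatten row-wise.
vectors = images.reshape(5, -1)   # shape (5, 12)
print(vectors.shape)              # (5, 12)
```

Once every image is a vector, "define vectors in this space as in the 2D case" just means ordinary vector arithmetic on these flattened arrays.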
Eigenfaces
• Eigenfaces look somewhat like generic faces

Linear Subspaces
• Convert x into v1, v2 coordinates
• What does the v2 coordinate measure?
  – distance to the line
  – use it for classification – near 0 for the orange points
• What does the v1 coordinate measure?
  – position along the line
  – use it to specify which orange point it is
• Classification can be expensive
  – Must either search (e.g., nearest neighbors) or store large probability density functions
• Suppose the data points are arranged as above
  – Idea: fit a line; the classifier measures distance to the line

Dimensionality Reduction
• We can represent the orange points with only their v1 coordinates
  – since the v2 coordinates are all essentially 0
• This makes it much cheaper to store and compare points
  – A bigger deal for higher-dimensional problems

Linear Subspaces
• Consider the variation along a direction v among all of the orange points:
  – What unit vector v minimizes the variance?
  – What unit vector v maximizes the variance?
• Solution: v1 is the eigenvector of A with the largest eigenvalue; v2 is the eigenvector of A with the smallest eigenvalue

Higher Dimensions
• Suppose each data point is N-dimensional
  – The same procedure applies
  – The eigenvectors of A define a new coordinate system
    • the eigenvector with the largest eigenvalue captures the most variation among the training vectors x
    • the eigenvector with the smallest eigenvalue has the least variation
  – We can compress the data by using only the top few eigenvectors
    • corresponds to choosing a "linear subspace"
    • represent points on a line, plane, or "hyper-plane"
    • these eigenvectors are known as the principal components
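The max-/min-variance eigenvector claim above can be checked numerically. A minimal sketch with synthetic 2-D "orange points" (the data, rotation angle, and noise scale are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 2-D points: spread mostly along one direction, tiny spread across it.
t = rng.normal(size=200)
X = np.column_stack([t, 0.05 * rng.normal(size=200)])
# Rotate so the dominant direction is not axis-aligned.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = X @ R.T

Xc = X - X.mean(axis=0)           # center the data
A = Xc.T @ Xc / len(Xc)           # covariance matrix
evals, evecs = np.linalg.eigh(A)  # eigh returns eigenvalues in ascending order
v1 = evecs[:, -1]                 # largest eigenvalue -> max-variance direction
v2 = evecs[:, 0]                  # smallest eigenvalue -> min-variance direction
# Variance along v1 dominates variance along v2 for this data.
print(evals[-1] > 10 * evals[0])  # True
```

The same `eigh` call works unchanged for N-dimensional data; only the shape of `A` grows.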
Problem: Size of Covariance Matrix A
• Suppose each data point is N-dimensional (N pixels)
  – The size of the covariance matrix A is N x N
  – The number of eigenfaces is N
  – Example: for N = 256 x 256 pixels, the size of A will be 65536 x 65536! The number of eigenvectors will be 65536!
• Typically, only 20-30 eigenvectors suffice. So, this method is very inefficient!

Efficient Computation of Eigenvectors
• If B is M x N and M << N, then A = B^T B is N x N, far larger than the M x M matrix B B^T
  – M = number of images, N = number of pixels
• Use B B^T instead; an eigenvector of B B^T is easily converted to one of B^T B:
  (B B^T) y = e y
  => B^T (B B^T) y = e (B^T y)
  => (B^T B)(B^T y) = e (B^T y)
  => B^T y is an eigenvector of B^T B

Eigenfaces – summary in words
• Eigenfaces are the eigenvectors of the covariance matrix of the probability distribution of the vector space of human faces
• Eigenfaces are the 'standardized face ingredients' derived from the statistical analysis of many pictures of human faces
• A human face may be considered to be a combination of these standardized faces

Generating Eigenfaces – in words
1. A large set of images of human faces is taken
2. The images are normalized to line up the eyes, mouths and other features
3. The eigenvectors of the covariance matrix of the face image vectors are then extracted
4. These eigenvectors are called eigenfaces

Eigenfaces for Face Recognition
• When properly weighted, eigenfaces can be summed together to create an approximate gray-scale rendering of a human face.
• Remarkably few eigenvector terms are needed to give a fair likeness of most people's faces.
• Hence eigenfaces provide a means of applying data compression to faces for identification purposes.
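The B B^T trick above is easy to verify numerically. A minimal sketch (matrix sizes and the random data matrix B are made up; in practice M would be the number of face images and N the number of pixels):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 6, 100                    # M images, N pixels, M << N
B = rng.random((M, N))

small = B @ B.T                  # M x M matrix: cheap to diagonalize
evals, Y = np.linalg.eigh(small)

# Conversion: if (B B^T) y = e y, then (B^T B)(B^T y) = e (B^T y).
y = Y[:, -1]                     # eigenvector for the largest eigenvalue
v = B.T @ y                      # candidate eigenvector of B^T B (N-dim)
big = B.T @ B                    # N x N matrix we avoided diagonalizing
# Check the eigenvector relation numerically.
print(np.allclose(big @ v, evals[-1] * v))  # True
```

Diagonalizing the 6 x 6 matrix instead of the 100 x 100 one is exactly the saving that makes eigenfaces practical for 65536-pixel images.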
Dimensionality Reduction
• The set of faces is a "subspace" of the set of images
  – Suppose it is K-dimensional
  – We can find the best subspace using PCA
  – This is like fitting a "hyper-plane" to the set of faces
    • spanned by vectors v1, v2, ..., vK
    • any face: x ≈ x̄ + a1 v1 + a2 v2 + ... + aK vK, where x̄ is the mean face

Eigenfaces
• PCA extracts the eigenvectors of A
  – Gives a set of vectors v1, v2, v3, ...
  – Each one of these vectors is a direction in face space
    • what do these look like?

Projecting onto the Eigenfaces
• The eigenfaces v1, ..., vK span the space of faces
  – A face x is converted to eigenface coordinates by ai = vi · (x − x̄), giving (a1, ..., aK)

Is this a face or not?

Recognition with Eigenfaces
• Algorithm
  1. Process the image database (set of images with labels)
     • Run PCA – compute eigenfaces
     • Calculate the K coefficients for each image
  2. Given a new image (to be recognized) x, calculate its K coefficients
  3. Detect if x is a face
  4. If it is a face, who is it?
     • Find the closest labeled face in the database
     • nearest-neighbor in K-dimensional space

Key Property of Eigenspace Representation
Given
• 2 images that are used to construct the eigenspace
• is the eigenspace
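The recognition algorithm above can be sketched end to end: build eigenfaces from a labeled gallery (using the small-matrix trick), project every image to its K coefficients, then classify a new image by nearest neighbor. All data here is hypothetical; the gallery, labels, image size, and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical gallery: 4 labeled "face" vectors of N = 50 pixels each.
gallery = rng.random((4, 50))
labels = ["ana", "bob", "cat", "dan"]
mean_face = gallery.mean(axis=0)
Bc = gallery - mean_face                 # centered data matrix B

# Eigenfaces via the small M x M matrix; keep the top K = 2.
evals, Y = np.linalg.eigh(Bc @ Bc.T)
V = Bc.T @ Y[:, -2:]                     # two leading eigenfaces (50 x 2)
V /= np.linalg.norm(V, axis=0)           # normalize each eigenface

def coords(x):
    # a_i = v_i . (x - mean face): the K eigenface coefficients of image x.
    return V.T @ (x - mean_face)

gallery_coords = np.array([coords(g) for g in gallery])

# "New" image: a slightly noisy copy of bob; classify by nearest neighbor
# in K-dimensional coefficient space.
query = gallery[1] + 0.001 * rng.normal(size=50)
d = np.linalg.norm(gallery_coords - coords(query), axis=1)
print(labels[int(np.argmin(d))])         # expect "bob" for small noise
```

The face/non-face detection step (step 3) is omitted here; a common choice is to threshold the reconstruction error of x in the eigenface subspace, though the preview above does not spell that out.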

