Reducing Data Dimension

Required reading:
• "A Tutorial on PCA," J. Shlens, http://www.snl.salk.edu/~shlens/pub/notes/pca.pdf

Machine Learning 10-601
April 7, 2008
Tom M. Mitchell
Machine Learning Department
Carnegie Mellon University

Outline
• Feature selection
  – Single-feature scoring criteria
  – Search strategies
• Unsupervised dimension reduction using all features
  – Principal Components Analysis
  – Singular Value Decomposition
  – Independent Components Analysis
• Supervised dimension reduction
  – Fisher Linear Discriminant
  – Hidden layers of neural networks

Dimensionality Reduction: Why?
• Learning a target function from data where some features are irrelevant: reduce variance, improve accuracy
• Wish to visualize high-dimensional data
• Sometimes have data whose "intrinsic" dimensionality is smaller than the number of features used to describe it: recover the intrinsic dimension

Supervised Feature Selection
Problem: wish to learn f: X → Y, where X = <X1, …, XN>, but suspect not all Xi are relevant.
Approach: preprocess the data to select only a subset of the Xi
• Score each feature, or subsets of features
  – How?
• Search for a useful subset of features to represent the data
  – How?

Scoring Individual Features Xi
Common scoring methods:
• Training or cross-validated accuracy of single-feature classifiers fi: Xi → Y
• Estimated mutual information I(Xi, Y) between Xi and Y
• χ2 statistic to measure independence between Xi and Y
• Domain-specific criteria
  – Text: score "stop" words ("the", "of", …) as zero
  – fMRI: score each voxel by a T-test for activation versus rest condition
  – …

Choosing Set of Features to learn F: X → Y
Common methods:
Forward1: Choose the n features with the highest scores.
Forward2:
• Choose the single highest-scoring feature Xk
• Rescore all features, conditioned on the set of already-selected features
  – E.g., Score(Xi | Xk) = I(Xi, Y | Xk)
  – E.g., Score(Xi | Xk) = Accuracy(predicting Y from Xi and Xk)
• Repeat, calculating new scores on each iteration, conditioning on the set of selected features
(A sketch of both forward strategies appears below.)
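To make the two forward strategies concrete, here is a minimal Python sketch (not from the lecture); it assumes scikit-learn and NumPy are available, and the dataset, n = 5, and the use of logistic-regression cross-validated accuracy as the conditional score are illustrative choices only.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
n = 5

# Forward1: score each feature once (estimated mutual information with Y)
# and keep the n highest-scoring features.
scores = mutual_info_classif(X, y, random_state=0)
forward1 = np.argsort(scores)[::-1][:n]

# Forward2: greedily add the feature that most improves cross-validated
# accuracy, conditioned on the features already selected.
selected = []
for _ in range(n):
    best_i, best_acc = None, -np.inf
    for i in range(X.shape[1]):
        if i in selected:
            continue
        acc = cross_val_score(LogisticRegression(max_iter=5000),
                              X[:, selected + [i]], y, cv=5).mean()
        if acc > best_acc:
            best_i, best_acc = i, acc
    selected.append(best_i)

print("Forward1:", sorted(forward1.tolist()))
print("Forward2:", selected)
```

Forward2 is far more expensive (it refits a classifier for every remaining candidate at every step), but because it rescores in the context of the already-selected set it can avoid picking several highly redundant features.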
Choosing Set of Features
Common methods:
Backward1: Start with all features, delete the n with the lowest scores.
Backward2: Start with all features, score each feature conditioned on the assumption that all others are included. Then:
• Remove the feature with the lowest (conditioned) score
• Rescore all features, conditioned on the new, reduced feature set
• Repeat

Feature Selection: Text Classification [Rogati & Yang, 2002]
[Figure: comparison of feature-scoring criteria for text classification. IG = information gain, chi = χ2 statistic, DF = document frequency. There are approximately 10^5 words in English.]

Impact of Feature Selection on Classification of fMRI Data [Pereira et al., 2005]
[Figure: accuracy classifying the category of the word read by the subject. Voxels are scored by the p-value of a regression predicting the voxel value from the task.]

Approach 2: Regularization
Key idea: add an L1 penalty to the learning objective, to penalize large weights
• L1 penalty = sum of the magnitudes of the weights
• Think about L1 vs. L2 regularization for logistic regression…

Summary: Supervised Feature Selection
Approach 1: Preprocess the data to select only a subset of the Xi
• Score each feature
  – Mutual information, prediction accuracy, …
• Find a useful subset of features based on their scores
  – Greedy addition/deletion of features to the pool
  – Features considered independently, or in the context of other selected features
Approach 2: Use L1 regularization of the parameters

Always do feature selection using the training set only (why?)
• Often use a nested cross-validation loop:
  – Outer loop to get an unbiased estimate of the final classifier's accuracy
  – Inner loop to get unbiased feature scores for feature selection

Unsupervised Dimensionality Reduction
Unsupervised mapping to a lower dimension. Differs from feature selection in two ways:
• Instead of choosing a subset of the features, create new features (dimensions) defined as functions over all features
• Don't consider class labels, just the data points

Principal Components Analysis
• Idea:
  – Given data points in d-dimensional space, project into a lower-dimensional space while preserving as much information as possible
    • E.g., find the best planar approximation to 3D data
    • E.g., find the best planar approximation to 10^4-D data
  – In particular, choose the projection that minimizes the squared error in reconstructing the original data

PCA: Find Projections to Minimize Reconstruction Error
Assume the data is a set of d-dimensional vectors, where the nth vector is x^n, and let x̄ denote the mean of the data. We can represent each centered vector exactly in terms of any d orthonormal basis vectors u_1, …, u_d:

  x^n - x̄ = \sum_{i=1}^{d} z^n_i u_i,   where z^n_i = (x^n - x̄)^T u_i

PCA: given M < d, find u_1, …, u_M that minimize the reconstruction error

  E_M = (1/N) \sum_{n=1}^{N} || x^n - x̂^n ||^2,   where x̂^n = x̄ + \sum_{i=1}^{M} z^n_i u_i

Note we get zero error if M = d, so all of the error is due to the missing components. Therefore

  E_M = \sum_{i=M+1}^{d} u_i^T Σ u_i,   where Σ = (1/N) \sum_{n=1}^{N} (x^n - x̄)(x^n - x̄)^T is the covariance matrix of X.

This is minimized, subject to ||u_i|| = 1, when each u_i is an eigenvector of Σ, i.e., when

  Σ u_i = λ_i u_i,   with λ_i the corresponding eigenvalue (a scalar).

The remaining error is then the sum of the discarded eigenvalues, so PCA keeps the M eigenvectors with the largest eigenvalues.

PCA algorithm 1:
1. X ← the N x d data matrix, with one row vector x^n per data point
2. X ← subtract the mean x̄ from each row vector x^n in X
3. Σ ← covariance matrix of X
4. Find the eigenvectors and eigenvalues of Σ
5. PCs ← the M eigenvectors with the largest eigenvalues
(A NumPy sketch of this algorithm appears below.)
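Here is a minimal NumPy sketch of PCA algorithm 1 (not from the lecture); the synthetic data and the choice M = 2 are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))  # N x d data matrix (step 1)

x_bar = X.mean(axis=0)
Xc = X - x_bar                               # step 2: subtract the mean from each row
Sigma = np.cov(Xc, rowvar=False)             # step 3: d x d covariance (uses 1/(N-1))
eigvals, eigvecs = np.linalg.eigh(Sigma)     # step 4: eigenvalues/eigenvectors of Sigma

M = 2
top = np.argsort(eigvals)[::-1][:M]
U = eigvecs[:, top]                          # step 5: M eigenvectors, largest eigenvalues

Z = Xc @ U                                   # coordinates z_i^n in the reduced space
X_hat = x_bar + Z @ U.T                      # reconstruction from M components
print("mean squared reconstruction error:",
      np.mean(np.sum((X - X_hat) ** 2, axis=1)))
```

The printed error equals the sum of the discarded eigenvalues (up to the 1/N versus 1/(N-1) convention), matching the derivation above.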
PCA Example
[Figure: two-dimensional data with the mean and the first and second eigenvectors marked.]

PCA Example
[Figure: the same data, reconstructed using only the first eigenvector (M = 1).]

Very Nice When Initial Dimension Not Too Big
What if we have very high-dimensional data?
• e.g., images (d ≈ 10^4)
Problem:
• The covariance matrix Σ is size d x d
• d = 10^4 → Σ has 10^8 entries
Singular Value Decomposition (SVD) to the rescue!
• pretty efficient algorithms available, including Matlab's SVD
• some implementations find just the top N eigenvectors

SVD
Write the data matrix X (one row per data point) as X = U S V^T. Then:
• U S gives the coordinates of the rows of X in the space of principal components
• S is diagonal, with S_k ≥ S_{k+1}; S_k^2 is the kth largest eigenvalue
• Rows of V^T are unit-length eigenvectors of X^T X
• If the columns of X have zero mean, then X^T X = c Σ (for a constant c), so these eigenvectors are the principal components
(A sketch using NumPy's SVD appears below.)
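And here is a minimal NumPy sketch (again, not from the lecture) of getting the same principal components from the SVD of the centered data matrix, without ever forming the d x d covariance matrix; the dimensions and M = 10 are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 1000))             # N x d with large d
Xc = X - X.mean(axis=0)                      # columns now have zero mean

U, S, Vt = np.linalg.svd(Xc, full_matrices=False)   # thin SVD: Xc = U @ diag(S) @ Vt

M = 10
components = Vt[:M]               # rows of V^T: unit-length eigenvectors of Xc^T Xc
coords = U[:, :M] * S[:M]         # U S: coordinates of each row in PC space
eigvals = S[:M] ** 2 / (X.shape[0] - 1)      # S_k^2 gives the kth largest eigenvalue
                                              # (1/(N-1) covariance convention)

print(coords.shape, components.shape, eigvals[:3])
```

Libraries such as scikit-learn's `sklearn.decomposition.PCA` compute principal components via the SVD in this way, and some SVD routines (e.g., Matlab's `svds`) return only the top components, as the slide notes.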