
CS 664: Segmentation (2)
Daniel Huttenlocher

Recap
- Last time covered perceptual organization more broadly, then focused in on pixel-wise segmentation
- Covered local graph-based methods such as MST and the Felzenszwalb-Huttenlocher method
- Today
  – Cut-based methods such as GrabCut and normalized cuts
  – Iterative local update methods such as mean shift

Cut-Based Techniques
- For costs, it is natural to consider minimum cost cuts
  – Removing the edges of smallest total cost that cut the graph into two parts
  – Graph only has finite-weight edges
- Manually assisted techniques: foreground vs. background
- General segmentation: recursively cut the resulting components
  – Question of when to stop

Image Segmentation & Minimum Cut
[Figure: image pixels as graph nodes, pixel-neighborhood edges weighted by a similarity measure w, and the minimum cut]

Segmentation by Min (s-t) Cut [Boykov 01]
- Manually select a few foreground (fg) and background (bg) pixels
  – Infinite-cost link from each bg pixel to the "t" node, and from each fg pixel to the "s" node
  – Compute the min cut that separates s from t
[Figure: s-t graph with the min cut separating s from t]

GrabCut [Rother et al., SIGGRAPH 2004]

Automatic Cut-Based Segmentation
- Fully-connected graph
  – Node for every pixel
  – Link between every pair of pixels p, q
  – Cost C_pq for each link measures similarity
[Figure: pixels p and q joined by a link with cost C_pq]

Drawbacks of Minimum Cut
- Weight of a cut is proportional to the number of edges, giving a preference for small regions
  – Motivation for Shi-Malik normalized cuts
[Figure: an "ideal cut" vs. cuts with lesser weight than the ideal cut]
* Slide from Khurram Hassan-Shafique, CAP5415 Computer Vision 2003

Normalized Cuts
- A number of normalization criteria have been proposed
- One that is commonly used [Shi & Malik]:
    Ncut(A,B) = cut(A,B)/assoc(A,V) + cut(A,B)/assoc(B,V)
- Where cut(A,B) is the standard definition ∑_{i∈A, j∈B} w_ij
- And assoc(A,V) = ∑_{i∈A} ∑_j w_ij

Computing Normalized Cuts
- It has been shown this is equivalent to an integer programming problem: minimize
    y^T (D - W) y / (y^T D y)
  subject to the constraint that y_i ∈ {1, -b} and y^T D 1 = 0
  – Where 1 is the vector of all 1's
- W is the affinity matrix
- D is the degree matrix (diagonal), D(i,i) = ∑_j w_ij

Approximating Normalized Cuts
- The integer programming problem is NP-hard
  – Instead simply solve the continuous (real-valued) version
  – This corresponds to finding the second smallest eigenvector of
      (D - W) y_i = λ_i D y_i
- Widely used method
  – Works well in practice
    • Large eigenvector problem, but sparse matrices
    • Often resolution-reduce images, e.g., 100x100
  – But no longer clearly related to the cut problem
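As a concrete illustration of the computation described on the slide above (my sketch, not code from the lecture), the snippet below builds a Gaussian affinity matrix W for a small 2-D point set, forms the degree matrix D, solves the generalized eigenproblem (D - W) y = λ D y with scipy.linalg.eigh, and splits the points by the sign of the second smallest eigenvector. Thresholding at zero is just one simple choice of splitting point, and sigma is a free bandwidth parameter.

# Minimal two-way normalized-cut sketch (illustrative, not the lecture's code).
import numpy as np
from scipy.linalg import eigh

def ncut_bipartition(points, sigma=1.0):
    # Affinities w_ij = exp(-||y_i - y_j||^2 / sigma^2), as in the slides
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / sigma ** 2)
    D = np.diag(W.sum(axis=1))            # diagonal degree matrix
    # Generalized eigenproblem (D - W) y = lambda * D y; eigh returns
    # eigenvalues in ascending order, so column 1 is the second smallest
    vals, vecs = eigh(D - W, D)
    y = vecs[:, 1]
    return y > 0                          # sign of y gives the two sides of the cut

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = np.vstack([rng.normal(0, 0.3, (20, 2)),
                     rng.normal(3, 0.3, (20, 2))])
    print(ncut_bipartition(pts, sigma=0.5).astype(int))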
Normalized Cut Examples

Another Look [Weiss 99]
- Consider eigen-analysis of the affinity matrix W = [w_ij]
  – Note W is symmetric; for images w_ij = w_ji
  – W is also essentially block diagonal
    • With suitable rearrangement of rows/cols so that vertices with higher affinity have nearer indices
    • Entries far from the diagonal are small (though not quite zero)
- Eigenvectors of W
  – Recall that for a real, symmetric matrix they form an orthogonal basis
    • Axes of decreasing "importance"

Structure of W
- Eigenvectors of a block diagonal matrix consist of the eigenvectors of the blocks
  – Padded with zeroes
- Note the rearrangement so that clusters lie near the diagonal is only conceptual
  – Eigenvectors of the permuted matrix are a permutation of the original eigenvectors
- Can think of eigenvectors as being associated with high-affinity "clusters"
  – Eigenvectors with large eigenvalues
  – Approximately the case

Structure of W
- Consider the case of a point set where affinities w_ij = exp(-(y_i - y_j)^2 / σ^2)
- With two clusters
  – Points indexed to respect the clusters, for clarity
- Block diagonal form of W
  – Within-cluster affinities A, B for the two clusters
  – Between-cluster affinity C
      W = [ A    C
            C^T  B ]

First Eigenvector of W
- Recall, vectors x_i satisfying W x_i = λ_i x_i
- Consider them ordered by eigenvalues λ_i
  – First eigenvector x_1 has the largest eigenvalue λ_1
- Elements of the first eigenvector serve as an "index vector" [Perona, Freeman]
  – Selecting elements of the highest-affinity cluster
[Figure: points in the plane, W, and the magnitude of the elements of x_1]

Clustering
- First eigenvector of W has been suggested as a clustering or segmentation criterion
  – For selecting the most significant segment
  – Then recursively segment the remainder
- Problematic when the non-diagonal blocks are nonzero (similar affinity clusters)
[Figure: points in the plane, W, and the elements of x_1]

Understanding Normalized Cuts
- Intractable discrete graph problem used to motivate a continuous (real-valued) problem
  – Find the second smallest "generalized eigenvector" (D - W) x_i = λ_i D x_i
  – Where D is the (diagonal) degree matrix, d_ii = ∑_j w_ij
- Can be viewed in terms of the first two eigenvectors of the normalized affinity matrix
  – Let N = D^(-1/2) W D^(-1/2)
  – Note n_ij = w_ij / (√d_ii √d_jj)
    • Affinity normalized by the degrees of the two nodes

Normalized Affinities
- Can be shown that
  – If x is an eigenvector of N with eigenvalue λ, then D^(-1/2) x is a generalized eigenvector of W with eigenvalue 1 - λ
  – The vector D^(1/2) 1 is an eigenvector of N with eigenvalue 1
- It follows that
  – The second smallest generalized eigenvector of W is the ratio of the first two eigenvectors of N
  – So ncut uses the normalized affinity matrix N and its first two eigenvectors, rather than the affinity matrix W and its first eigenvector

Contrasting W and N
- Three simple point clustering examples
  – W, the first eigenvector of W, and the ratio of the first two eigenvectors of N (the generalized eigenvector of W)

Image Segmentation
- Considering W and N for segmentation
  – Affinity is a negative exponential based on distance in x, y, b space
- Eigenvectors of N are more correlated with regions
[Figure: first 4 eigenvectors of W vs. first 4 eigenvectors of N]

Using More Eigenvectors
- Based on the k largest eigenvectors
  – Construct a matrix Q such that (ideally) q_ij = 1 if i and j are in the same cluster, 0 otherwise
- Let V be the matrix whose columns are the first k eigenvectors of W
- Normalize the rows of V to have unit Euclidean norm
  – Ideally each node (row) lies in one cluster (column)
- Let Q = V V^T
  – Each entry is the product of two unit vectors

Normalization and k Eigenvectors
- Normalized affinities help correct for variations in the overall degree of affinity
  – So compute Q for N instead of W
- Contrasting Q with the ratio of the first two eigenvectors of N (the ncut criterion)
  – Q more clearly selects the most significant region
    • Using k = 6 eigenvectors
  – Row of the Q matrix vs. ratio of the eigenvectors of N
[Figure: Q vs. the ncut criterion on example images]

Spectral Methods
- Eigenvectors of affinity and normalized affinity matrices
- Widely used outside computer vision for graph-based clustering
  – Link structure of web pages, citation structure of scientific papers
  – Often directed rather than undirected graphs

Iterative Clustering Methods
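The recap names mean shift as the representative iterative local update method for this part of the lecture. As a hedged sketch (mine, not the lecture's code), the snippet below implements a minimal mean shift clustering: every point is repeatedly moved to the kernel-weighted mean of the data around it until it stops moving, and points whose trajectories converge to the same mode are grouped into one cluster. The Gaussian kernel, bandwidth h, and merge tolerance are illustrative assumptions.

# Minimal mean shift sketch (illustrative assumptions: Gaussian kernel, bandwidth h).
import numpy as np

def mean_shift(X, h=1.0, n_iter=100, merge_tol=1e-2):
    modes = X.copy()
    for _ in range(n_iter):
        # Kernel-weighted mean of the data, computed for every current position
        d2 = ((modes[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * h ** 2))
        new = (w @ X) / w.sum(axis=1, keepdims=True)
        shift = np.abs(new - modes).max()
        modes = new
        if shift < 1e-6:                  # all points have (numerically) converged
            break
    # Group points whose modes coincide (within merge_tol) into the same cluster
    labels = -np.ones(len(X), dtype=int)
    centers = []
    for i, m in enumerate(modes):
        for k, c in enumerate(centers):
            if np.linalg.norm(m - c) < merge_tol:
                labels[i] = k
                break
        else:
            centers.append(m)
            labels[i] = len(centers) - 1
    return labels, np.array(centers)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = np.vstack([rng.normal(0, 0.3, (30, 2)),
                     rng.normal(3, 0.3, (30, 2))])
    labels, centers = mean_shift(pts, h=0.5)
    print(labels)
    print(centers)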

