CS664 Lecture #27: Some Topics We Didn't Cover

Some material taken from:
Yuri Boykov & Olga Veksler, University of Western Ontario
Uri Feige, Weizmann Institute & Microsoft Research

Outline
- Reminders
- Final Project
- Today's Topics
- 1. Particle filtering: importance sampling; particle filtering issues
- 2. Learning low-level vision: training data; examples; output depends on training
- 3. Graph-based segmentation: segmentation via min cut; Wu and Leahy; normalized cuts; normalized cut criterion; some eigenvector facts; spectral graph partitioning; eigenvector properties; spectral graph bisection; computing normalized cuts; eigenvector approximation; normalized cut examples; nested cuts; properties; MST-based segmentation; MST segmentation examples; conclusions

Reminders
- PS3 is due Friday 12/2
- Papers will be graded by tomorrow
  – I will email you about how to pick them up
- Please fill out course evaluations
  – In particular, I'd like to hear about:
    • Choice of papers we covered/didn't cover
    • Effectiveness of the quizzes
- Final project is due Thursday 12/15
  – The next slide is from Lecture 1, 8/25/05

Final Project
- Most of the course grade comes from this
  – It is actually possible to get an F in CS664
- You need to pick a project by midway through the term, and work hard on it
  – Write-up due during the final exam period
  – Proposal due by November 1
- Must be original research, i.e., a non-trivial idea that has never been done
- Not required to solve a vision problem
  – Document the idea's strengths and weaknesses

Today's Topics
- Particle filters
- Learning low-level vision
- Segmentation via graphs
1. Particle filtering
- Another alternative is to represent the posterior distribution by a set of particles
  – Points with probabilities (point masses)
- Two phases: prediction and update
  – Prediction: each particle is modified according to a "motion model"
    • Including random noise
  – Update: weights are re-evaluated using the new observed data

Importance sampling
- We have samples from the prior distribution p(x), and the likelihood p(z|x)
  – We need to sample from the posterior p(x|z)
  – More generally, given samples from g(x), how do we create samples from h(x)?
- We can rescale the weight of a sample from g(x) by the ratio h(x)/g(x)
  – Then we normalize
  – In our example, we scale up by the likelihood
    • I.e., multiply a particle x_i's weight by the likelihood p(z|x_i)

Particle filtering issues
- With enough particles you can get a very good estimate of the posterior
  – But it may not be tractable
  – Especially in high-dimensional spaces
- Example difficulty: particles can easily disappear (the resampling problem)
- Generally a successful approach, especially for hand tracking

2. Learning low-level vision
- Super-resolution problem (astronomy)
  – Create a high-resolution image from a low-resolution one
  – Obviously this is ill-posed, but it's sometimes possible with enough prior information
  – A nice problem from a Bayesian perspective
- In our low-resolution image, we have an observed intensity o_i at each pixel
  – This is created from a high-resolution true image h_i by averaging plus noise

Training data
- We can take a bunch of high-resolution images and create low-resolution versions
- Based on this, we will create our labels
  – A set of hypotheses {h_i} for each intensity o_i
- Need a prior that relates neighboring hypotheses h_i, h_j
- Can make two hypotheses overlap slightly
  – Determine compatibility by the difference in intensities in the overlap
  – Can estimate parameters from training data

Examples
(Figure: two panels, "Cubic splines" and "Learning".)

Output depends on training
(Figure.)
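The prediction/update cycle with importance-sampling weights described in the particle-filtering section above can be sketched as follows. This is a minimal 1D bootstrap filter under assumptions not in the lecture: Gaussian motion and observation models, and resampling triggered by a low effective sample size; all names are illustrative.

```python
import numpy as np

def particle_filter_step(particles, weights, observation, rng,
                         motion_noise=0.5, obs_noise=1.0):
    """One prediction/update cycle of a bootstrap particle filter."""
    # Prediction: move each particle under the motion model, including noise.
    particles = particles + rng.normal(0.0, motion_noise, size=particles.shape)

    # Update: rescale each weight by the likelihood p(z | x_i), then normalize
    # (this is the importance-sampling step from the slides).
    likelihood = np.exp(-0.5 * ((observation - particles) / obs_noise) ** 2)
    weights = weights * likelihood
    weights = weights / weights.sum()

    # Resample when the effective sample size collapses, to counter the
    # "disappearing particles" problem mentioned above.
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

rng = np.random.default_rng(0)
n = 1000
particles = rng.normal(0.0, 2.0, n)   # samples from the prior p(x)
weights = np.full(n, 1.0 / n)
for z in [0.9, 1.1, 1.0, 0.95]:       # observations of a state near 1.0
    particles, weights = particle_filter_step(particles, weights, z, rng)
estimate = np.sum(weights * particles)
```

The posterior mean estimate concentrates near the observed state as evidence accumulates; with few particles or a high-dimensional state the same scheme degrades, as the slides warn.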
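The training-data idea above can be sketched in a deliberately simplified form: build (low-resolution intensity, high-resolution patch) pairs from downsampled training images, then super-resolve by nearest-neighbor lookup. This sketch omits the neighbor-compatibility prior and the Bayesian inference from the lecture; the function names and the block-averaging observation model are illustrative assumptions.

```python
import numpy as np

def downsample(img, f=2):
    """Average f-by-f blocks: the 'averaging plus noise' model, minus noise."""
    h, w = img.shape
    return img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def train(high_res_images, f=2):
    """Build (low-res intensity, high-res patch) training pairs."""
    pairs = []
    for hi in high_res_images:
        lo = downsample(hi, f)
        for i in range(lo.shape[0]):
            for j in range(lo.shape[1]):
                pairs.append((lo[i, j], hi[f * i:f * i + f, f * j:f * j + f]))
    return pairs

def super_resolve(lo, pairs, f=2):
    """For each observed intensity o_i, pick the hypothesis patch whose
    low-res intensity is closest (no compatibility prior between neighbors)."""
    keys = np.array([k for k, _ in pairs])
    out = np.zeros((f * lo.shape[0], f * lo.shape[1]))
    for i in range(lo.shape[0]):
        for j in range(lo.shape[1]):
            best = np.argmin(np.abs(keys - lo[i, j]))
            out[f * i:f * i + f, f * j:f * j + f] = pairs[best][1]
    return out

# Sanity check: super-resolving a training image's own low-res version
# recovers it exactly, since each intensity matches its own training key.
rng = np.random.default_rng(0)
img = rng.random((8, 8))
pairs = train([img])
restored = super_resolve(downsample(img), pairs)
```

Because the output is stitched from training patches, a different training set yields a different result, which is exactly the "output depends on training" point above.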
3. Graph-based segmentation
- Why can't we use min cut to directly segment an image?
  – The edge weights between pixels are "affinities"
    • I.e., a decreasing function of |Ip − Iq|
  – Where would the source and sink go?
- Two answers:
  – Use a variant of min cut that has no source and sink
    • E.g., Karger's algorithm
  – Nested cuts

Segmentation via min cut
(Figure: image pixels as graph nodes with similarity-measure weights w; the minimum cut separates the segments.)

Wu and Leahy
- Recursively split the image in two
  – Each stage is fast
- Problems:
  – The image may have no natural binary split!
  – Bias towards small segments

Normalized cuts
- We get in trouble by minimizing cut(A,B) = ∑_{i∈A, j∈B} w_ij
- We can normalize this by using assoc(A,S) = ∑_{j∈A} ∑_{i∈S} w_ij
  – assoc(A,A) measures the association within A
- A "normalized" measure of how similar A and B are within themselves is:
  Nassoc(A,B) = assoc(A,A)/assoc(A,V) + assoc(B,B)/assoc(B,V)

Normalized cut criterion
- Consider the normalized cut measure
  Ncut(A,B) = cut(A,B)/assoc(A,V) + cut(A,B)/assoc(B,V)
  – This normalizes the value of cut(A,B)
  – Minimizing Ncut usually avoids small partitions
    • Because assoc(A,V) is small if A is small
- Can show Ncut(A,B) = 2 − Nassoc(A,B)
  – So we can simultaneously find a cut that avoids small partitions while maximizing the similarity within A and B

Some eigenvector facts
- An eigenvector x with eigenvalue λ obeys Wx = λx
  – Note that a multiple of x is also an eigenvector
  – There can be multiple different eigenvectors with the same eigenvalue λ
    • The multiplicity of λ
- If W is symmetric:
  – Eigenvalues are real
  – Eigenvectors are orthogonal

Spectral graph partitioning
- Consider a graph where all w_ij = 1
  – Can use multiple edges for arbitrary weights
  – Also, assume all nodes have degree d
  – Assign the weight x_i to node i
- What is an eigenvector of W?
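The question above has a quick numerical answer: for a d-regular graph with unit weights, the all-ones vector is an eigenvector of the adjacency matrix W with eigenvalue d. A small sketch on a 4-cycle (the example graph is my own, not from the slides):

```python
import numpy as np

# Adjacency matrix of a 4-cycle with unit weights: every node has degree d = 2.
W = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])

ones = np.ones(4)
# Multiplying W by the all-ones vector sums each row, i.e. each node's
# degree, so W @ 1 = d * 1: the all-ones vector is an eigenvector with λ = d.
is_eigvec = np.allclose(W @ ones, 2 * ones)

# W is symmetric, so its eigenvalues are real; λ = d is the largest of them.
eigvals = np.linalg.eigvalsh(W)
```

This is the setup the next slides build on: every other eigenvector is orthogonal to 1, so its entries sum to zero, which is what makes the second eigenvector useful for partitioning.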
(Figure: example graph and its eigenvector for λ = 3.)

Eigenvector properties
- The largest eigenvector for this graph has eigenvalue λ = d
  – One eigenvector is 1, the all-ones vector
    • All other eigenvectors are orthogonal to 1
    • So their entries sum to 0
- The set of eigenvalues of W is called the graph's spectrum
  – The 2nd-largest eigenvector can be used for spectral graph partitioning
- Example: bisectable graphs (|A| = |B|)

Spectral graph bisection
- An eigenvector with λ = d assigns one component +1 and the other −1
- If each node instead has εd neighbors in the other component, this is still an eigenvector, now with λ = d(1 − 2ε)
- Instead of the adjacency matrix, we often use the graph's Laplacian L = D − W
  – D(i,i) = ∑_j w_ij is the degree matrix (diagonal)
  – Seek the 2nd-smallest eigenvalue of L instead of the 2nd-largest of W
  – The corresponding eigenvector is called the Fiedler vector

Computing Normalized Cuts
- Minimizing Ncut is equivalent to the integer programming problem:
  minimize y^T (D − W) y / (y^T D y)
  where W is the affinity matrix
- Minimize subject to: y_i ∈ {1, −b} for some constant b, and y^T D 1 = 0
- Close to spectral graph partitioning
  – Beware of the integrality constraints!

Eigenvector approximation
- Integer
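The Laplacian-based bisection described above can be sketched on a toy graph: two triangles joined by a single bridge edge (my own example, not from the slides). Thresholding the Fiedler vector at zero recovers the two natural components; computing normalized cuts replaces this ordinary eigenproblem with the generalized one, (D − W)y = λDy.

```python
import numpy as np

# Two triangles {0,1,2} and {3,4,5} joined by the single bridge edge (2, 3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0

D = np.diag(W.sum(axis=1))   # degree matrix D(i,i) = sum_j w_ij (diagonal)
L = D - W                    # graph Laplacian

# eigh returns eigenvalues in ascending order, so column 1 holds the
# eigenvector of the 2nd-smallest eigenvalue: the Fiedler vector.
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]

# Thresholding the Fiedler vector at 0 bisects the graph at the bridge.
side_a = set(np.flatnonzero(fiedler >= 0))
side_b = set(np.flatnonzero(fiedler < 0))
```

The smallest eigenvalue of L is always 0 (with eigenvector 1); the 2nd-smallest measures how well connected the graph is, and here it is small precisely because only the bridge edge holds the two triangles together.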