6.891 Computer Vision and Applications
Prof. Trevor Darrell
Lecture 15: Fitting and Segmentation
Readings: F&P Ch. 15.3-15.5, 16

Last time: "Segmentation and Clustering (Ch. 14)"
• Supervised -> unsupervised category learning needs segmentation
• K-means
• Mean shift
• Graph cuts
• Hough transform

[Image slides] From: Rob Fergus http://www.robots.ox.ac.uk/%7Efergus/ [Slide from Bradski & Thrun, Stanford]

Learned Model
From: Rob Fergus http://www.robots.ox.ac.uk/%7Efergus/
The shape model. The mean location is indicated by the cross, with the ellipse showing the uncertainty in location. The number by each part is the probability of that part being present.

Background Techniques Compared
[Comparison figure] From the Wallflower paper.

Mean Shift Algorithm
The mean shift algorithm seeks the "mode", or point of highest density, of a data distribution:
1. Choose a search window size.
2. Choose the initial location of the search window.
3. Compute the mean location (centroid of the data) in the search window.
4. Center the search window at the mean location computed in Step 3.
5. Repeat Steps 3 and 4 until convergence.
(A minimal code sketch of these steps appears at the end of this recap.)

Graph-Theoretic Image Segmentation
Build a weighted graph G = (V, E) from the image:
V: image pixels
E: connections between pairs of nearby pixels
W_ij: probability that pixels i and j belong to the same region

Eigenvectors and affinity clusters
• Simplest idea: we want a vector a giving the association between each element and a cluster
• We want elements within this cluster to, on the whole, have strong affinity with one another
• We could maximize a^T A a
• But we need the constraint a^T a = 1
• This is an eigenvalue problem: choose the eigenvector of A with the largest eigenvalue
• Shi/Malik, Scott/Longuet-Higgins, Ng/Jordan/Weiss, etc.

Hough transform
Tokens cast votes for model parameters.

Today: "Fitting and Segmentation (Ch. 15)"
• Robust estimation
• EM
• Model selection
• RANSAC
(Maybe "Segmentation I" and "Segmentation II" would be a better way to split these two lectures!)
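To make the recap concrete, here is a minimal sketch of mean shift steps 1-5 above, assuming a flat (uniform) kernel over a Euclidean window; the function and parameter names (mean_shift_mode, radius) are illustrative choices, not from the lecture.

import numpy as np

def mean_shift_mode(data, start, radius, tol=1e-5, max_iter=100):
    """Seek a mode (local density peak) of `data` (an N x d array) by
    repeatedly moving a search window to the centroid of the points
    that fall inside it (steps 3-5 of the recap)."""
    center = np.asarray(start, dtype=float)
    for _ in range(max_iter):
        # Step 3: centroid of the data inside the current window
        in_window = np.linalg.norm(data - center, axis=1) < radius
        if not in_window.any():
            break                      # empty window: give up
        new_center = data[in_window].mean(axis=0)
        # Steps 4-5: re-center; stop when the shift is negligible
        if np.linalg.norm(new_center - center) < tol:
            return new_center
        center = new_center
    return center

# Two 2-D clusters; starting near the second, the window climbs to its mode.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 0.5, (100, 2)),
                  rng.normal(5.0, 0.5, (100, 2))])
print(mean_shift_mode(data, start=[4.0, 4.0], radius=2.0))  # close to (5, 5)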
Robustness
• Squared error can be a source of bias in the presence of noise points
  – One fix is EM; we'll do this shortly
  – Another is an M-estimator
    • Square nearby, threshold far away
  – A third is RANSAC
    • Search for good points
[Example figure slides]

Robust Statistics
• Recover the best fit to the majority of the data.
• Detect and reject outliers.

Estimating the mean
[Figure: data points drawn from a Gaussian distribution]
The mean \mu = \frac{1}{N} \sum_{i=1}^{N} d_i is the optimal solution to
\min_\mu \sum_{i=1}^{N} (d_i - \mu)^2
where (d_i - \mu) is the residual.

Estimating the mean
The mean maximizes this likelihood:
\max_\mu p(d \mid \mu) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(d_i - \mu)^2}{2\sigma^2}\right)
The negative log gives (with \sigma = 1) the "least squares" estimate:
\min_\mu \sum_{i=1}^{N} (d_i - \mu)^2

Estimating the mean
What happens if we change just one measurement by +\Delta?
\mu' = \mu + \frac{\Delta}{N}
With a single "bad" data point I can move the mean arbitrarily far.

Influence
Breakdown point: the percentage of outliers required to make the solution arbitrarily bad.
Least squares:
• the influence of an outlier is linear (\Delta / N)
• the breakdown point is 0% -- not robust!
What about the median?

What's Wrong?
In \min_\mu \sum_{i=1}^{N} (d_i - \mu)^2, outliers (large residuals) have too much influence:
\rho(x) = x^2, \quad \psi(x) = 2x

Approach
Want to give less influence to points beyond some value.
Influence is proportional to the derivative of the \rho function.

Approach
\min_\mu \sum_{i=1}^{N} \rho(d_i - \mu, \sigma)
where \rho is a robust error function and \sigma is a scale parameter.
Replace \rho(x, \sigma) = x^2 / \sigma^2 with something that gives less influence to outliers.

Approach
\min_\mu \sum_{i=1}^{N} \rho(d_i - \mu, \sigma)
No closed-form solutions!
- Iteratively reweighted least squares (see the sketch after the flow example below)
- Gradient descent

L1 Norm
\rho(x) = |x|, \quad \psi(x) = \mathrm{sign}(x)

Redescending Function
Tukey's biweight: beyond a point, the influence begins to decrease.
Points beyond where the second derivative is zero are outliers.

Robust Estimation
The Geman-McClure function works well: twice differentiable, redescending.
\rho(r, \sigma) = \frac{r^2}{\sigma^2 + r^2}
Influence function (the derivative of the norm with respect to r):
\psi(r, \sigma) = \frac{2 r \sigma^2}{(\sigma^2 + r^2)^2}

Scale is critical!
Popular choice: a robust scale estimate.
[Figures: scale too small / too large / just right]

Example: Motion
Assumption: within a finite image region, there is only a single motion present.
Violated by: motion discontinuities, shadows, transparency, specular reflections...
Violations of brightness constancy result in large residuals.

Estimating Flow
Minimize:
E(a) = \sum_{x \in R} \rho(I_x u(x; a) + I_y v(x; a) + I_t, \sigma)
Parameterized models provide strong constraints:
• Hundreds, or thousands, of constraints.
• A handful (e.g. six) of unknowns.
Can be very accurate (when the model is good)!
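Since these robust objectives have no closed-form solution, here is the promised IRLS sketch for the 1-D robust mean under the Geman-McClure norm from the "Robust Estimation" slide; the weight w(r) = psi(r)/r follows from the psi given there, while the function names, the median initialization, and the example data are illustrative assumptions.

import numpy as np

def gm_weight(r, sigma):
    # Geman-McClure IRLS weight: w(r) = psi(r)/r = 2*sigma^2 / (sigma^2 + r^2)^2
    return 2.0 * sigma**2 / (sigma**2 + r**2)**2

def robust_mean_irls(d, sigma=1.0, n_iter=50):
    """Minimize sum_i rho(d_i - mu, sigma) by iteratively reweighted least
    squares: each iteration solves a weighted least-squares problem whose
    weights shrink toward zero for large (outlying) residuals."""
    d = np.asarray(d, dtype=float)
    mu = np.median(d)                      # robust starting point
    for _ in range(n_iter):
        w = gm_weight(d - mu, sigma)       # downweight large residuals
        mu = np.sum(w * d) / np.sum(w)     # weighted least-squares update
    return mu

d = np.array([2.1, 1.9, 2.0, 2.2, 1.8, 50.0])   # one gross outlier
print(np.mean(d))           # 10.0 -- the outlier drags the mean far off
print(robust_mean_irls(d))  # about 2.0 -- the outlier is downweighted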
Deterministic Annealing
Start with a "quadratic" optimization problem and gradually reduce outliers.

Continuation method
GNC: Graduated Non-Convexity.

Fragmented Occlusion
[Example figures]

Results
[Result figures]

Multiple Motions, again
Find the dominant motion while rejecting outliers.
Black & Anandan; Black & Jepson.

Robust estimation models only a single process explicitly
Assumption: constraints that don't fit the dominant motion are treated as "outliers" (noise).
Robust norm:
E(a) = \sum_{(x, y) \in R} \rho(\nabla I^T u(x; a) + I_t, \sigma)
Problem? They aren't noise!

Alternative View
• There are two things going on simultaneously.
• We don't know which constraint lines correspond to which motion.
• If we knew this we could estimate the multiple motions.
  - a type of "segmentation" problem
• If we knew the segmentation, then estimating the motion would be easy.

EM: General framework
Estimate parameters from segmented data.
Consider segmentation labels to be missing data.

Missing variable problems
A missing data problem is a statistical problem where some data is missing.
There are two natural contexts in which missing data are important:
• terms in a data vector are missing for some instances and present for others (perhaps someone responding to a survey was embarrassed by a question)
• an inference problem can be made very much simpler by rewriting it using some variables whose values are unknown.

Missing variable problems
In many vision problems, if some variables were known, the maximum likelihood inference problem would be easy.
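To make the missing-data idea concrete, here is a minimal EM sketch for a two-component 1-D Gaussian mixture, where each point's component label is the missing variable (the "segmentation"); the initialization, names, and example data are illustrative assumptions, not the lecture's implementation.

import numpy as np

def em_two_gaussians(x, n_iter=100):
    """EM for a two-component 1-D Gaussian mixture. E-step: soft
    'ownership' probabilities of the missing labels. M-step: weighted
    maximum-likelihood updates of the parameters."""
    x = np.asarray(x, dtype=float)
    mu = np.percentile(x, [25.0, 75.0])      # crude initialization
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior probability that each point has each label
        lik = np.stack([pi[k] / np.sqrt(2.0 * np.pi * var[k])
                        * np.exp(-(x - mu[k])**2 / (2.0 * var[k]))
                        for k in range(2)], axis=1)
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: reweighted parameter estimates given the soft labels
        Nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / Nk
        var = (resp * (x[:, None] - mu)**2).sum(axis=0) / Nk
        pi = Nk / len(x)
    return mu, var, pi

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(6.0, 1.0, 200)])
print(em_two_gaussians(x)[0])   # means recovered near 0 and 6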