
Fitting and Probabilistic Segmentation
6.891 Computer Vision and Applications
Prof. Trevor Darrell
Lecture 15: Fitting and Segmentation
Readings: F&P Ch. 15.3-15.5, 16

Outline: Last time (segmentation and clustering review); robustness and robust statistics (estimating the mean, influence, the L1 norm, redescending functions, robust scale); example: motion (estimating flow, deterministic annealing, continuation methods, fragmented occlusion, multiple motions); EM (the general framework, missing-variable problems, motion segmentation, layered representations, EM in pictures and in equations, line fitting, the E and M steps, issues with EM, choosing parameters, estimating the number of models, fitting two lines, color segmentation, mixture models); model selection (discounts, cross-validation, extreme segmentation); RANSAC and its applications.

Last time: "Segmentation and Clustering (Ch. 14)"
• Supervised -> unsupervised category learning needs segmentation
• K-means
• Mean shift
• Graph cuts
• Hough transform

Learned Model
From: Rob Fergus, http://www.robots.ox.ac.uk/%7Efergus/ [slide from Bradski & Thrun, Stanford]
The shape model. The mean location is indicated by the cross, with the ellipse showing the uncertainty in location. The number by each part is the probability of that part being present.

Background Techniques Compared
[Figure: comparison table, from the Wallflower paper.]

Mean Shift Algorithm
The mean shift algorithm seeks the "mode", or point of highest density, of a data distribution:
1. Choose a search window size.
2. Choose the initial location of the search window.
3. Compute the mean location (centroid of the data) in the search window.
4. Center the search window at the mean location computed in Step 3.
5. Repeat Steps 3 and 4 until convergence.

Graph-Theoretic Image Segmentation
Build a weighted graph G = (V, E) from the image:
V: image pixels
E: connections between pairs of nearby pixels
W_ij: probability that pixels i and j belong to the same region

Eigenvectors and affinity clusters
• Simplest idea: we want a vector a giving the association between each element and a cluster.
• We want elements within this cluster to, on the whole, have strong affinity with one another.
• We could maximize a^T A a, but we need the constraint a^T a = 1.
• This is an eigenvalue problem: choose the eigenvector of A with the largest eigenvalue.
• Shi/Malik, Scott/Longuet-Higgins, Ng/Jordan/Weiss, etc.

Hough transform
[Figure: tokens and the votes they cast in parameter space.]

Today: "Fitting and Segmentation (Ch. 15)"
• Robust estimation
• EM
• Model selection
• RANSAC
(Maybe "Segmentation I" and "Segmentation II" would be a better way to split these two lectures!)

Robustness
• Squared error can be a source of bias in the presence of noise points.
  – One fix is EM; we'll do this shortly.
  – Another is an M-estimator: square nearby, threshold far away.
  – A third is RANSAC: search for good points.

Robust Statistics
• Recover the best fit to the majority of the data.
• Detect and reject outliers.

Estimating the mean
For data drawn from a Gaussian distribution, the sample mean

    \mu = \frac{1}{N} \sum_{i=1}^{N} d_i

is the optimal solution to the least-squares problem

    \min_{\mu} \sum_{i=1}^{N} (d_i - \mu)^2

where d_i - \mu is the residual.

Estimating the Mean
The mean maximizes the likelihood

    \max_{\mu} p(d \mid \mu) = \max_{\mu} \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left( -\frac{(d_i - \mu)^2}{2\sigma^2} \right)

The negative log gives (with \sigma = 1) the "least squares" estimate

    \min_{\mu} \sum_{i=1}^{N} (d_i - \mu)^2

Estimating the mean
What happens if we change just one measurement, adding \Delta to it? The estimate shifts to

    \mu' = \mu + \frac{\Delta}{N}

With a single "bad" data point I can move the mean arbitrarily far.

Influence
Breakdown point: the percentage of outliers required to make the solution arbitrarily bad.
Least squares:
• The influence of an outlier is linear (\Delta / N).
• The breakdown point is 0% -- not robust!
What about the median?

What's Wrong?
In the least-squares objective \min_{\mu} \sum_{i=1}^{N} (d_i - \mu)^2, outliers (large residuals) have too much influence:

    \rho(x) = x^2,    \psi(x) = 2x

Approach
Influence is proportional to the derivative \psi of the \rho function. We want to give less influence to points beyond some value.

Approach
Minimize

    \min_{\mu} \sum_{i=1}^{N} \rho(d_i - \mu, \sigma)

where \rho is a robust error function and \sigma is a scale parameter. That is, replace \rho(x, \sigma) = (x / \sigma)^2 with something that gives less influence to outliers.

Approach
With a robust \rho there are no closed-form solutions:
– iteratively reweighted least squares
– gradient descent

L1 Norm

    \rho(x) = |x|,    \psi(x) = \mathrm{sign}(x)

Redescending Function
Beyond a point, the influence begins to decrease; points beyond where the second derivative is zero are outliers. Example: Tukey's biweight.

Robust Estimation
The Geman-McClure function works well: it is twice differentiable and redescending.

    \rho(r, \sigma) = \frac{r^2}{\sigma^2 + r^2},    \psi(r, \sigma) = \frac{2 r \sigma^2}{(\sigma^2 + r^2)^2}

(the influence function \psi is the derivative d\rho/dr of the norm).

Robust scale
Scale is critical! Popular choice: [formula not preserved in the preview].
[Figures: fits with the scale too small, too large, and just right.]
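Since there is no closed-form solution, iteratively reweighted least squares is the usual workhorse. Below is a minimal sketch for the 1-D location (mean) problem above, using the Geman-McClure weight \psi(r, \sigma)/r = 2\sigma^2/(\sigma^2 + r^2)^2. The helper names and the annealing schedule over \sigma are assumptions added here for illustration (they prefigure the deterministic-annealing idea later in the lecture), not something prescribed on the slides.

    import numpy as np

    def geman_mcclure_weight(r, sigma):
        # IRLS weight: psi(r, sigma) / r = 2*sigma^2 / (sigma^2 + r^2)^2
        return 2.0 * sigma**2 / (sigma**2 + r**2) ** 2

    def robust_location(d, sigma_start=4.0, sigma_end=0.5, n_anneal=6, n_irls=10):
        """Geman-McClure location estimate via iteratively reweighted
        least squares. The scale sigma is annealed from large (nearly
        quadratic objective) to small (strongly redescending)."""
        mu = np.median(d)  # robust starting point
        for sigma in np.geomspace(sigma_start, sigma_end, n_anneal):
            for _ in range(n_irls):
                w = geman_mcclure_weight(d - mu, sigma)
                mu = np.sum(w * d) / np.sum(w)  # weighted least-squares update
        return mu

    # A cluster near 2 plus one gross outlier.
    d = np.array([1.8, 2.1, 2.0, 1.9, 2.2, 100.0])
    print(robust_location(d))  # close to 2.0; the plain mean of d is ~18.3

Starting from the median and shrinking \sigma gradually keeps the estimator from locking onto the outlier early; with a fixed small \sigma, IRLS can converge to a poor local minimum.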
Example: Motion
Assumption: within a finite image region, there is only a single motion present.
Violated by: motion discontinuities, shadows, transparency, specular reflections, ...
Violations of brightness constancy result in large residuals.

Estimating Flow
Minimize

    E(\mathbf{a}) = \sum_{\mathbf{x} \in R} \rho\big( I_x\, u(\mathbf{x}; \mathbf{a}) + I_y\, v(\mathbf{x}; \mathbf{a}) + I_t,\ \sigma \big)

Parameterized models provide strong constraints:
• hundreds, or thousands, of constraints;
• a handful (e.g., six) of unknowns.
Can be very accurate (when the model is good)!

Deterministic Annealing
Start with a "quadratic" optimization problem and gradually reduce the influence of outliers.

Continuation method
GNC: Graduated Non-Convexity.

Fragmented Occlusion
[Example figures.]

Results
[Result figures.]

Multiple Motions, again
Find the dominant motion while rejecting outliers. (Black & Anandan; Black & Jepson)

Robust estimation models only a single process explicitly

    E(\mathbf{a}) = \sum_{(x, y) \in R} \rho\big( \nabla I^{\top} \mathbf{u}(\mathbf{x}; \mathbf{a}) + I_t,\ \sigma \big)

Robust norm: \rho downweights constraints with large residuals.
Assumption: constraints that do not fit the dominant motion are treated as outliers.
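As a concrete illustration of the dominant-motion formulation above, here is a minimal IRLS sketch for a purely translational model u(x; a) = (u, v), again with the Geman-McClure norm. The function name, the derivative arrays Ix, Iy, It, and the scale schedule are assumptions for the sketch; in practice the derivatives come from filtering the image pair, and the estimate is usually embedded in a coarse-to-fine pyramid.

    import numpy as np

    def dominant_translation(Ix, Iy, It, sigmas=(8.0, 4.0, 2.0, 1.0), n_irls=10):
        """Estimate a single dominant translational flow (u, v) over a region.

        Minimizes E(u, v) = sum_i rho(Ix_i*u + Iy_i*v + It_i, sigma) with the
        Geman-McClure norm via iteratively reweighted least squares, so
        constraints from secondary motions or occlusions are downweighted.
        Ix, Iy, It: 1-D arrays of spatial and temporal derivatives in the region.
        """
        u = v = 0.0
        for sigma in sigmas:                      # annealed scale, GNC-style
            for _ in range(n_irls):
                r = Ix * u + Iy * v + It          # brightness-constancy residuals
                w = 2.0 * sigma**2 / (sigma**2 + r**2) ** 2   # psi(r, sigma) / r
                # Weighted normal equations: a 2x2 system in (u, v).
                A = np.array([[np.sum(w * Ix * Ix), np.sum(w * Ix * Iy)],
                              [np.sum(w * Ix * Iy), np.sum(w * Iy * Iy)]])
                b = -np.array([np.sum(w * Ix * It), np.sum(w * Iy * It)])
                u, v = np.linalg.solve(A + 1e-9 * np.eye(2), b)  # guard degenerate A
        return u, v

The final weights w also act as a rough segmentation: low-weight points are exactly the constraints the single-process model cannot explain, which motivates the EM formulation that the lecture turns to next.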

