CS 664: Image Matching and Robust Fitting
Daniel Huttenlocher

Matching and Fitting
– Recognition and matching are closely related to fitting problems
– Parametric fitting can serve as a more restricted domain for investigating questions of noise and outliers
  • Methods robust in the presence of noise
– Two widely used techniques
  • RANSAC
  • Hough transform
– Both generalize to matching and recognition

How Many "Good" Linear Fits?

RANSAC
– RANdom SAmple Consensus (Fischler and Bolles, 1981)
– Select a small number of data points and use them to generate an instance of the model
  • E.g., fit a line
– Check the number of data points consistent with this fit
– Iterate until a "good enough" consistent set is found
– Generate a new fit from this set

RANSAC
Objective: robust fit of a model to a data set S which contains outliers
Algorithm:
(i) Randomly select a sample of s data points from S and instantiate the model from this subset.
(ii) Determine the set of data points Si which are within a distance threshold t of the model. The set Si is the consensus set of the sample and defines the inliers of S.
(iii) If the size of Si is greater than some threshold T, re-estimate the model using all the points in Si and terminate.
(iv) If the size of Si is less than T, select a new subset and repeat the above.
(v) After N trials the largest consensus set Si is selected, and the model is re-estimated using all the points in Si.

RANSAC Line Fitting Example
Task: estimate the best line
– Sample two points and fit a line
– Count the total number of points within a threshold of the line
– Repeat until a good result is obtained

Choosing Number of Samples
– Choose N samples so that, with probability p, at least one random sample is free from outliers
  • E.g., p = 0.99
– Let e denote the proportion of outliers
  • Data points that do not fit the model within the distance threshold t
– Probability of selecting all inliers
  • Sampling is without replacement, so the draws are not independent
  • E.g., D data points, of which I are inliers

Choosing Number of Samples
– Probability that all s points of a sample are inliers:

    ∏_{i=0}^{s-1} (I - i) / (D - i)

– For s << D, approximate this by (I/D)^s, i.e. (1 - e)^s
– Now choose N so that, with probability p, at least one random sample is free from outliers:

    (1 - (1 - e)^s)^N = 1 - p

Choosing Number of Samples
– Solving for N:

    N = log(1 - p) / log(1 - (1 - e)^s)

– Required number of samples N for p = 0.99:

    sample    proportion of outliers e
    size s    5%   10%   20%   25%   30%   40%    50%
       2       2    3     5     6     7    11     17
       3       3    4     7     9    11    19     35
       4       3    5     9    13    17    34     72
       5       4    6    12    17    26    57    146
       6       4    7    16    24    37    97    293
       7       4    8    20    33    54   163    588
       8       5    9    26    44    78   272   1177
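The formula above is easy to evaluate directly. A minimal Python sketch (added here for illustration; the function name num_samples is my own, not from the slides):

    import math

    def num_samples(p, e, s):
        """N such that, with probability p, at least one of N samples of
        size s is outlier-free when the outlier fraction is e."""
        return math.ceil(math.log(1.0 - p) / math.log(1.0 - (1.0 - e) ** s))

    # Reproduces entries of the table above:
    num_samples(0.99, 0.50, 2)   # -> 17
    num_samples(0.99, 0.30, 4)   # -> 17
    num_samples(0.99, 0.50, 8)   # -> 1177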
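Putting steps (i)-(v) together for the line-fitting example, a sketch like the following works. This is illustrative only: fit_line, ransac_line, and the threshold t are my names, and the recomputation of N from the best consensus set so far anticipates the "Adaptively Choosing N" slide that follows.

    import math
    import random

    def fit_line(p1, p2):
        """Line through two points as (a, b, c), normalized so that
        a*x + b*y + c is the signed distance of (x, y) from the line."""
        (x1, y1), (x2, y2) = p1, p2
        a, b = y2 - y1, x1 - x2
        c = -(a * x1 + b * y1)
        n = math.hypot(a, b)
        return a / n, b / n, c / n

    def ransac_line(points, t, p=0.99):
        """RANSAC line fit: sample two points, fit a line, count the
        consensus set, and repeat, updating N from the inlier fraction."""
        best_inliers, N, sample_count = [], float("inf"), 0
        while sample_count < N:
            sample_count += 1
            p1, p2 = random.sample(points, 2)
            if p1 == p2:                      # degenerate sample, resample
                continue
            a, b, c = fit_line(p1, p2)
            inliers = [(x, y) for (x, y) in points
                       if abs(a * x + b * y + c) <= t]
            if len(inliers) > len(best_inliers):
                best_inliers = inliers
                e = 1.0 - len(inliers) / len(points)
                if e > 0.0:
                    N = math.log(1 - p) / math.log(1 - (1 - e) ** 2)
                else:                          # every point is an inlier
                    N = 0
        # Step (v): re-estimate the line from best_inliers, e.g. by least squares
        return best_inliers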
Adaptively Choosing N
– The fraction of outliers is often unknown a priori
  • Pick the "worst" case, e.g. 50%, and adapt if more inliers are found
– N = ∞, sample_count = 0
– While N > sample_count, repeat:
  • Choose a sample and count the number of inliers
  • Set e = 1 - (number of inliers)/(total number of points)
  • Recompute N from e
  • Increment sample_count by 1
– Terminate

Number of Samples II
– May take more samples than one would think, due to degenerate point sets
– E.g., two sampled points may both be inliers, and yet the estimate they yield is poor

Determine Potential Correspondences
– Compare interest points
  • E.g., similarity measures: SAD or SSD on a small neighborhood
– Note: the correlation score can be used to bias the selection of samples, choosing matches with a better correlation score more often
– Note: multiple matches for each point can be fed into RANSAC (although this increases the proportion of outliers)

Example: Robust Computation
– Interest points (500/image)
– Putative correspondences (268)
– Outliers (117)
– Inliers (151)
– Final inliers (262)

Example: 2D Similarity Transformation
– Two point sets, Set 1 and Set 2, with a set of matches from some correlation function; the lighter matches are incorrect
– Two matches are used to infer the transform; here the top match is correct and the bottom one incorrect, so features mapped under the transform do not align well
– On the other hand, if we pick two correct matches (modulo noise), the alignment is good

Cost Function
– RANSAC can be sensitive to the choice of the distance threshold
  • Too large: all hypotheses are ranked equally
  • Too small: leads to an unstable fit
– The same sampling strategy can be followed with any modification of the cost function
– [Figures: with the threshold too high, a poor solution is ranked as good as a correct one; with the threshold too low, there is no support]

Cost Function
– Examples of other cost functions (sketches of the LMS and MLESAC-style costs follow the MAPSAC slide below):
  • Least Median of Squares (LMS): take the sample that minimizes the median of the residuals
  • MLESAC/MAPSAC: use the likelihood or posterior of the data
  • MINPRAN (Stewart): makes assumptions about the randomness of the data

LMS
– Repeat M times:
  • Sample the minimal number of matches to estimate a two-view relation
  • Calculate the error over all the data
  • Choose the relation that minimizes the median of the errors

Pros and Cons of LMS
– PRO
  • No threshold for inliers is needed
  • Can yield a robust estimate of the variance of the errors
– CON
  • Cannot work with more than 50% outliers

Robust Maximum Likelihood Estimation
– Random sampling can optimize any function
– A better, robust cost function: MLESAC
  • The probability of the data given the instantiation of the model
  • Maximum likelihood or MAP estimation
– [Figures: under the MLESAC/MAPSAC cost, one solution is ranked better than the other, rather than the two being ranked equally]

MAPSAC
– Add in a prior to get a MAP solution
– With MAPSAC one could sample fewer than the minimal number of points to make an estimate (using the prior as extra information)
– Any posterior can be optimized; random sampling is good for matching and for function optimization
  • E.g., MAPSAC is a way to optimize objective functions whether or not outliers are present
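A minimal LMS sketch along the lines of the LMS slide above (my illustration; it reuses the hypothetical fit_line helper from the RANSAC sketch earlier, and the fixed trial count M is an assumption):

    import random
    import statistics

    def lms_line(points, M=500):
        """Least Median of Squares line fit: keep the 2-point hypothesis
        whose median squared residual is smallest. No inlier threshold
        is needed, but more than half the data must be inliers."""
        best_line, best_med = None, float("inf")
        for _ in range(M):
            p1, p2 = random.sample(points, 2)
            if p1 == p2:                      # degenerate sample, resample
                continue
            a, b, c = fit_line(p1, p2)        # from the RANSAC sketch above
            med = statistics.median((a * x + b * y + c) ** 2
                                    for (x, y) in points)
            if med < best_med:
                best_line, best_med = (a, b, c), med
        return best_line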
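In the same spirit, a hypothesis can be scored by a robust likelihood-based cost instead of an inlier count. The sketch below uses a truncated squared error, which is a common simplification of the MLESAC cost (the MSAC variant); full MLESAC/MAPSAC would evaluate a Gaussian-plus-uniform mixture likelihood, optionally with a prior. The threshold t plays the role of the noise scale and is an assumption:

    def robust_cost(line, points, t):
        """Truncated squared-error score: inliers contribute their squared
        residual, outliers a constant penalty t**2. Minimizing this inside
        the sampling loop ranks hypotheses more finely than counting inliers."""
        a, b, c = line
        return sum(min((a * x + b * y + c) ** 2, t * t) for (x, y) in points)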
Underlying Assumptions
– LMS criterion
  • The minimum fraction of inliers is known
– RANSAC criterion
  • The inlier bound (distance threshold) is known

Not Necessarily Desirable
– Structures may be "seen" in data despite unknown scale and large outlier fractions
  • E.g., a data set with 45% random outliers, 44% of points from a 1st structure, and 11% from a 2nd structure
– Potential unknown properties:
  • Sensor characteristics
  • Scene complexity
  • Performance of low-level operations
– Problems:
  • Handling unknown scale
  • Handling varying scale

Goal
– A robust objective function, suitable for use in a random-sampling algorithm, that is –

