UCSB ECE 181B - Introduction to Computer Vision

Slide 1: Introduction to Computer Vision, CS / ECE 181B
Tuesday, May 11, 2004
Multiple view geometry and stereo
Handout #6 available (check with Isabelle)
Ack: M. Turk and M. Pollefeys

Slide 2: Midterm

Slide 3: Seeing in 3D
• Humans can perceive depth, shape, etc. – 3D properties of the world
  – How do we do it?
• We use many cues
  – Oculomotor convergence/divergence
  – Accommodation (changing focus)
  – Motion parallax (changing viewpoint)
  – Monocular depth cues: occlusion, perspective, texture gradients, shading, size
  – Binocular disparity (stereo)
• How can computers perceive depth?

Slides 4-6: Multiple views and depth (figures)

Slide 7: Why multiple views?
• A camera projects the 3D world into 2D images
• This is not always a problem – humans can figure out a lot from a 2D view!

Slide 8: Why multiple views?
• But precise 3D information (distance, depth, shape, curvature, etc.) is difficult or impossible to obtain from a single view
• In order to measure distances, sizes, angles, etc. we need multiple views (and calibrated cameras!)
  – Monocular → binocular → trinocular → …

Slide 9: Multiple view geometry
• Two big questions for multiple view geometry problems:
  – Which are possible?
  – Which are most likely?
• There are many possible configurations of scene points that could have created corresponding points in multiple views

Slide 10: Questions (M. Pollefeys)
• Correspondence geometry: Given an image point x in the first view, how does this constrain the position of the corresponding point x' in the second image?
• Camera geometry (motion): Given a set of corresponding image points {xi ↔ x'i}, i = 1, …, n, what are the cameras P and P' for the two views?
• Scene geometry (structure): Given corresponding image points xi ↔ x'i and cameras P, P', what is the position of (their pre-image) X in space?
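The third question (scene geometry) is the one the triangulation step of a stereo system answers: given the two camera matrices and a pair of corresponding image points, recover the pre-image X. As a rough illustration, here is a minimal NumPy sketch of linear (DLT) triangulation; the camera matrices, intrinsics, and scene point below are made-up assumptions, not values from the lecture.

```python
# Minimal sketch (not from the slides): given two camera matrices P1, P2 and a
# corresponding point pair x1 <-> x2, recover the pre-image X by linear (DLT)
# triangulation. All numeric values are made-up toy assumptions.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation of one point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image points."""
    # Each view contributes two linear constraints on the homogeneous point X
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # null vector of A (least-squares solution)
    return X[:3] / X[3]            # dehomogenize

# Toy stereo rig: identical intrinsics, second camera offset along the x-axis
K = np.diag([800.0, 800.0, 1.0])                                    # assumed intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])   # baseline of 0.1

X_true = np.array([0.2, -0.1, 2.0, 1.0])    # a scene point (homogeneous)
x1h = P1 @ X_true; x1 = x1h[:2] / x1h[2]    # its projection in view 1
x2h = P2 @ X_true; x2 = x2h[:2] / x2h[2]    # its projection in view 2
print(triangulate(P1, P2, x1, x2))          # ~ [0.2, -0.1, 2.0]
```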
Slide 11: Two-view geometry
• The epipolar geometry is defined by the origins of the camera coordinate frames, the scene point P, and the locations of the image planes
• The epipolar line is not necessarily along a row of the image

Slide 12: Epipolar geometry
• C, C', x, x' and X are coplanar

Slide 13: Epipolar geometry
• What if only C, C', x are known?

Slide 14: Epipolar geometry
• All points on the plane π project on l and l'

Slide 15: Epipolar geometry
• Family of planes π and lines l and l'
• Intersection in e and e'

Slide 16: Epipolar geometry
• Epipoles e, e'
  = intersection of baseline with image plane
  = projection of projection center in other image
  = vanishing point of camera motion direction
• An epipolar plane = plane containing the baseline (1-D family)
• An epipolar line = intersection of an epipolar plane with the image (they always come in corresponding pairs)

Slide 17: Epipolar geometry
• Epipolar plane, epipoles, epipolar lines, baseline

Slide 18: Epipolar constraint
• Potential matches for p have to lie on the corresponding epipolar line l'
• Potential matches for p' have to lie on the corresponding epipolar line l

Slide 19: Example: converging cameras

Slide 20: Example: motion parallel with the image plane

Slide 21: Example: forward motion (epipoles e, e')

Slide 22: Trinocular epipolar constraint

Slide 23: Basic approach to stereo vision
• Find features of interest in N image views
  – The "correspondence problem"
• Triangulate
  – A method to measure distance and direction by forming a triangle and using trigonometry
• Reconstruct object/scene depth
  – From dense points
  – From sparse points

Slide 24: Step 1: The correspondence problem
• Given a "point" in one image, find the location of that same point in a second image (and maybe a third, and a fourth, …)
• A search problem: Given point p in the left image, where in the right image should we search for a corresponding point p'?
• Sounds easy, huh?

Slide 25: Correspondence problem (left image, right image)
• What is a point?
• How do we compare points in different images? (Similarity measure)
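One concrete answer to the similarity-measure question is the sum of squared differences (SSD) over a small window of support around the point. The sketch below is illustrative only: it assumes rectified images, so the epipolar line of a left-image pixel is simply the same row of the right image, and the window size and disparity range are arbitrary choices, not values from the lecture.

```python
# Minimal sketch (assumed setup, not from the slides): correlation-based
# correspondence for rectified stereo, where the search for a match runs along
# the same row of the right image. Window size and disparity range are assumed.
import numpy as np

def match_along_row(left, right, row, col, half_win=5, max_disp=64):
    """Disparity of left[row, col] by SSD over a square window, searching the
    same row of the right image (the rectified epipolar line)."""
    patch = left[row - half_win:row + half_win + 1,
                 col - half_win:col + half_win + 1].astype(np.float64)
    best_disp, best_ssd = 0, np.inf
    for d in range(0, min(max_disp, col - half_win) + 1):
        cand = right[row - half_win:row + half_win + 1,
                     col - d - half_win:col - d + half_win + 1].astype(np.float64)
        ssd = np.sum((patch - cand) ** 2)   # similarity measure: sum of squared differences
        if ssd < best_ssd:
            best_ssd, best_disp = ssd, d
    return best_disp

# Toy usage with a random image (real use would pass rectified grayscale frames)
rng = np.random.default_rng(0)
left = rng.random((100, 200))
right = np.roll(left, -8, axis=1)             # right view shifted left by 8 pixels
print(match_along_row(left, right, 50, 100))  # expect a disparity of 8
```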
Slide 26: Correspondence problem (left image, right image)

Slide 27: The correspondence problem
• A classically difficult problem in computer vision
  – Is every point visible in both images?
  – Do we match points or regions or …?
  – Are corresponding (L-R) image regions similar?
• Correspondence is easiest when the depth is large compared with the camera baseline distance
  – Because the cameras then have about the same viewpoint
  – But…
• Two classes of stereo correspondence algorithms:
  – Feature based (sparse): corners, edges, lines, …
  – Correlation based (dense): how large a window of support to use?

Slide 28: Multiple views
• What do you need to know in order to calculate the depth (or location) of the point that causes p and p'?
• The values of p = (u, v) and p' = (u', v')
• The locations of C1 and C2 (full extrinsic parameters)
  – The rigid transformation between C1 and C2
• The intrinsic parameters of C1 and C2

Slide 29: Duality: Calibration and stereo
• Given calibrated cameras, we can find the depth of points
• Given corresponding points, we can calibrate the cameras

Slide 30: Example: Extrinsic parameters from 3 points
• 1 known point, 2 known points, 3 known points
• In this case, we know the point correspondences and the point distances. If we only know the correspondences, we'll need at least five points

Slide 31: The geometry of multiple views
• Epipolar geometry
  – The essential matrix
  – The fundamental matrix
• The trifocal tensor
• The quadrifocal tensor

Slide 32: Epipolar geometry
• Epipolar plane, epipoles, epipolar lines, baseline

Slide 33: Epipolar constraint
• Potential matches for p have to lie on the corresponding epipolar line l'
• Potential matches for p' have to lie on the corresponding epipolar line l

Slide 34: Epipolar lines example

Slide 35: Matrix form of cross product
• The cross product of two vectors is a third vector, perpendicular to the others (right-hand rule):
    a × b = (a2 b3 − a3 b2,  a3 b1 − a1 b3,  a1 b2 − a2 b1)
• It can be written as a matrix-vector product, a × b = [a]× b, with the skew-symmetric matrix
    [a]× = [   0   −a3    a2 ]
           [  a3     0   −a1 ]
           [ −a2    a1     0 ]
• Note that a · (a × b) = 0 and b · (a × b) = 0

Slide 36: Case 1: Calibrated camera
• With camera centers O and O', image points p and p', and scene point P, the rays Op, O'p' and the baseline OO' are coplanar:
    Op · (OO' × O'p') = 0
• With [R t] the rigid transformation between the two camera coordinate frames, this becomes
    p · (t × Rp') = 0
• This …
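Slides 35 and 36 contain the pieces needed to write the calibrated two-view constraint in matrix form: expressing the cross product as the skew-symmetric matrix [t]× turns p · (t × Rp') = 0 into pᵀ E p' = 0 with E = [t]× R, the essential matrix listed on slide 31. The NumPy sketch below checks this numerically; the rotation, translation, and scene point are made-up assumptions, not values from the lecture.

```python
# Minimal sketch (assumed values, not from the slides): the calibrated epipolar
# constraint p . (t x R p') = 0, written as p^T E p' = 0 with E = [t]_x R.
import numpy as np

def skew(a):
    """Matrix [a]_x such that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

# Assumed relative pose between the two cameras: rotation R (about the y-axis)
# and translation t (the baseline direction)
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([1.0, 0.0, 0.0])

E = skew(t) @ R                       # essential matrix

# Project one scene point into both cameras (normalized, calibrated coordinates)
X = np.array([0.3, -0.2, 4.0])        # point expressed in the second camera's frame
p_prime = X / X[2]                    # normalized image point in view 2
Xc = R @ X + t                        # same point expressed in the first camera's frame
p = Xc / Xc[2]                        # normalized image point in view 1

print(p @ E @ p_prime)                # ~ 0: the epipolar constraint holds
```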

