A Paraperspective Factorization Method for Shape and Motion Recovery

Conrad J. Poelman and Takeo Kanade, Fellow, IEEE

IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 3, March 1997

• C.J. Poelman is with the Satellite Assessment Center (WSAT), USAF Phillips Laboratory, Albuquerque, NM 87117-5776. E-mail: [email protected].
• T. Kanade is with the School of Computer Science, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213-3890. E-mail: [email protected].

Manuscript received June 15, 1994; revised Jan. 10, 1996. Recommended for acceptance by S. Peleg.

Abstract—The factorization method, first developed by Tomasi and Kanade, recovers both the shape of an object and its motion from a sequence of images, using many images and tracking many feature points to obtain highly redundant feature position information. The method robustly processes the feature trajectory information using singular value decomposition (SVD), taking advantage of the linear algebraic properties of orthographic projection. However, an orthographic formulation limits the range of motions the method can accommodate. Paraperspective projection, first introduced by Ohta, is a projection model that closely approximates perspective projection by modeling several effects not modeled under orthographic projection, while retaining linear algebraic properties. Our paraperspective factorization method can be applied to a much wider range of motion scenarios, including image sequences containing motion toward the camera and aerial image sequences of terrain taken from a low-altitude airplane.

Index Terms—Motion analysis, shape recovery, factorization method, three-dimensional vision, image sequence analysis, singular value decomposition.

1 INTRODUCTION

Recovering the geometry of a scene and the motion of the camera from a stream of images is an important task in a variety of applications, including navigation, robotic manipulation, and aerial cartography. While this is possible in principle, traditional methods have failed to produce reliable results in many situations [2].

Tomasi and Kanade [13], [14] developed a robust and efficient method for accurately recovering the shape and motion of an object from a sequence of images, called the factorization method. It achieves its accuracy and robustness by applying a well-understood numerical computation, the singular value decomposition (SVD), to a large number of images and feature points, and by directly computing shape without computing depth as an intermediate step. The method was tested on a variety of real and synthetic images, and was shown to perform well even for distant objects, where traditional triangulation-based approaches tend to perform poorly.
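To make the linear-algebraic core of that idea concrete, the sketch below shows a rank-3 SVD factorization of a measurement matrix of tracked feature coordinates, in the spirit of the orthographic formulation described above. It is a minimal illustration rather than the paper's full algorithm: the function name and the row layout of W are choices made for this example, and the metric (normalization) step that upgrades the affine factors to true rotation and shape is omitted.

```python
import numpy as np

def rank3_factorization(W):
    """Minimal sketch of the SVD step in a Tomasi-Kanade-style factorization.

    W : (2F, P) measurement matrix of tracked image coordinates
        (u-coordinates of all P points in rows 0..F-1, v-coordinates
        in rows F..2F-1).

    Returns M_hat (2F, 3) and S_hat (3, P) such that the registered
    measurements are approximately M_hat @ S_hat, up to an unknown 3x3
    linear transform resolved by a separate metric-constraint step.
    """
    # Register the measurements: subtracting each row's mean removes the
    # translation component under an affine projection model.
    t = W.mean(axis=1, keepdims=True)
    W_reg = W - t

    # The registered matrix is ideally rank 3; the SVD yields its best
    # rank-3 approximation in the presence of tracking noise.
    U, s, Vt = np.linalg.svd(W_reg, full_matrices=False)
    sqrt_S3 = np.diag(np.sqrt(s[:3]))
    M_hat = U[:, :3] @ sqrt_S3      # camera (motion) factor
    S_hat = sqrt_S3 @ Vt[:3, :]     # shape factor
    return M_hat, S_hat, t
```

For example, with 30 frames and 100 tracked points, W has shape (60, 100), and the returned factors M_hat and S_hat have shapes (60, 3) and (3, 100).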
The Tomasi-Kanade factorization method, however, assumed an orthographic projection model. The applicability of the method is therefore limited to image sequences created from certain types of camera motions. The orthographic model contains no notion of the distance from the camera to the object. As a result, shape reconstruction from image sequences containing large translations toward or away from the camera often produces deformed object shapes, as the method tries to explain the size differences in the images by creating size differences in the object. The method also supplies no estimation of translation along the camera's optical axis, which limits its usefulness for certain tasks.

There exist several perspective approximations which capture more of the effects of perspective projection while remaining linear. Scaled orthographic projection, sometimes referred to as "weak perspective" [5], accounts for the scaling effect of an object as it moves towards and away from the camera. Paraperspective projection, first introduced by Ohta [6] and named by Aloimonos [1], accounts for the scaling effect as well as the different angle from which an object is viewed as it moves in a direction parallel to the image plane.

In this paper, we present a factorization method based on the paraperspective projection model. The paraperspective factorization method is still fast and robust with respect to noise. It can be applied to a wider realm of situations than the original factorization method, such as sequences containing significant depth translation or containing objects close to the camera, and can be used in applications where it is important to recover the distance to the object in each image, such as navigation.

We begin by describing our camera and world reference frames and introducing the mathematical notation that we use. We review the original factorization method as defined in [13], presenting it in a slightly different manner in order to make its relation to the paraperspective method more apparent. We then present our paraperspective factorization method, followed by a description of a perspective refinement step. We conclude with the results of several experiments which demonstrate the practicality of our system.

2 PROBLEM DESCRIPTION

In a shape-from-motion problem, we are given a sequence of F images taken from a camera that is moving relative to an object. Assume for the time being that we locate P prominent feature points in the first image, and track these points from each image to the next, recording the coordinates $(u_{fp}, v_{fp})$ of each point p in each image f. Each feature point p that we track corresponds to a single world point, located at position $s_p$ in some fixed world coordinate system. Each image f was taken at some camera orientation, which we describe by the orthonormal unit vectors $i_f$, $j_f$, and $k_f$, where $i_f$ and $j_f$ correspond to the x and y axes of the camera's image plane, and $k_f$ points along the camera's line of sight. We describe the position of the camera in each frame f by the vector $t_f$ indicating the camera's focal point. This formulation is illustrated in Fig. 1.

Fig. 1. Coordinate system.

The result of the feature tracker is a set of P feature point coordinates $(u_{fp}, v_{fp})$ for each of the F frames of the image sequence. From this information, our goal is to estimate the shape of the object, as the position $s_p$ of each feature point, and the motion of the camera, as the $i_f$, $j_f$, $k_f$, and $t_f$ for each frame.
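As a concrete illustration of this setup, the sketch below assembles the tracked coordinates $(u_{fp}, v_{fp})$ into a 2F x P measurement matrix of the kind that factorization methods operate on. The function name, the (F, P, 2) input layout, and the u-rows-over-v-rows stacking convention are assumptions made for this example, not notation from the paper.

```python
import numpy as np

def build_measurement_matrix(tracks):
    """Stack tracked feature coordinates into a 2F x P measurement matrix.

    tracks : array of shape (F, P, 2), where tracks[f, p] = (u_fp, v_fp),
             the image coordinates of point p in frame f. A complete set of
             tracks is assumed; missing observations are not handled here.

    Returns W of shape (2F, P): the F rows of u-coordinates stacked above
    the F rows of v-coordinates.
    """
    tracks = np.asarray(tracks, dtype=float)
    F, P, _ = tracks.shape
    U = tracks[:, :, 0]          # (F, P) u-coordinates
    V = tracks[:, :, 1]          # (F, P) v-coordinates
    return np.vstack([U, V])     # (2F, P)

# Example: 30 frames, 100 tracked points -> W has shape (60, 100), which can
# be fed to a factorization routine such as the SVD sketch shown earlier.
W = build_measurement_matrix(np.random.rand(30, 100, 2))
assert W.shape == (60, 100)
```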

