CORNELL CS 6670 - Creating Full View Panoramic Image Mosaics and Environment Maps

Creating Full View Panoramic Image Mosaics and Environment Maps
Richard Szeliski and Heung-Yeung Shum
Microsoft Research

Abstract

This paper presents a novel approach to creating full view panoramic mosaics from image sequences. Unlike current panoramic stitching methods, which usually require pure horizontal camera panning, our system does not require any controlled motions or constraints on how the images are taken (as long as there is no strong motion parallax). For example, images taken from a hand-held digital camera can be stitched seamlessly into panoramic mosaics. Because we represent our image mosaics using a set of transforms, there are no singularity problems such as those existing at the top and bottom of cylindrical or spherical maps. Our algorithm is fast and robust because it directly recovers 3D rotations instead of general 8-parameter planar perspective transforms. Methods to recover camera focal length are also presented. We also present an algorithm for efficiently extracting environment maps from our image mosaics. By mapping the mosaic onto an arbitrary texture-mapped polyhedron surrounding the origin, we can explore the virtual environment using standard 3D graphics viewers and hardware without requiring special-purpose players.

CR Categories and Subject Descriptors: I.3.3 [Computer Graphics]: Picture/Image Generation - Viewing Algorithms; I.3.4 [Image Processing]: Enhancement - Registration.

Additional Keywords: full-view panoramic image mosaics, environment mapping, virtual environments, image-based rendering.

1 Introduction

Image-based rendering is a popular way to simulate a visually rich tele-presence or virtual reality experience. Instead of building and rendering a complete 3D model of the environment, a collection of images is used to render the scene while supporting virtual camera motion. For example, a single cylindrical image surrounding the viewer enables the user to pan and zoom inside an environment created from real images [4, 13].
More powerful image-based rendering systems can be built by adding a depth map to the image [3, 13], or using a larger collection of images [3, 6, 11].

In this paper, we focus on image-based rendering systems without any depth information, i.e., those which only support user panning, rotation, and zoom. Most of the commercial products based on this idea (such as QuickTime VR [22] and Surround Video [23]) use cylindrical images with a limited vertical field of view, although newer systems support full spherical maps (e.g., PhotoBubble [24], Infinite Pictures [25], and RealVR [26]).

A number of techniques have been developed for capturing panoramic images of real-world scenes (for references on computer-generated environment maps, see [7]). One way is to record an image onto a long film strip using a panoramic camera to directly capture a cylindrical panoramic image [14]. Another way is to use a lens with a very large field of view such as a fisheye lens. Mirrored pyramids and parabolic mirrors can also be used to directly capture panoramic images [27, 28].

A less hardware-intensive method for constructing full view panoramas is to take many regular photographic or video images in order to cover the whole viewing space. These images must then be aligned and composited into complete panoramic images using an image mosaic or "stitching" algorithm [12, 17, 9, 4, 13, 18]. Most stitching systems require a carefully controlled camera motion (pure pan), and only produce cylindrical images [4, 13]. In this paper, we show how uncontrolled 3D camera rotation can be used.

The case of general camera rotation has been studied previously [12, 9, 18], using an 8-parameter planar perspective motion model. By contrast, our algorithm uses a 3-parameter rotational motion model, which is more robust since it has fewer unknowns. Since this algorithm requires knowing the camera's focal length, we develop a method for computing an initial focal length estimate from a set of 8-parameter perspective registrations.
We also investigate how to close the "gap" (or "overlap") due to accumulated registration errors after a complete panoramic sequence has been assembled. To demonstrate the advantages of our algorithm, we apply it to a sequence of images taken with a handheld digital camera.

In our work, we represent our mosaic by a set of transformations. Each transformation corresponds to one image frame in the input image sequence and represents the mapping between image pixels and viewing directions in the world, i.e., it represents the camera matrix [5]. During the stitching process, our approach makes no commitment to the final output representation (e.g., spherical or cylindrical), which allows us to avoid the singularities associated with such representations.

Once a mosaic has been constructed, it can, of course, be mapped into cylindrical or spherical coordinates, and displayed using a special purpose viewer [4]. In this paper, we argue that such specialized representations are not necessary, and represent just a particular choice of geometry and texture coordinate embedding. Instead, we show how to convert our mosaic to an environment map [7], i.e., how to map our mosaic onto any texture-mapped polyhedron surrounding the origin. This allows us to use standard 3D graphics APIs and 3D model formats, and to use 3D graphics accelerators for texture mapping.

The remainder of our paper is structured as follows. Sections 2 and 3 review our algorithms for panoramic mosaic construction using cylindrical coordinates and general perspective transforms. Section 4 describes our novel direct rotation recovery algorithm. Section 5 presents our technique for estimating the focal length from perspective registrations. Section 6 discusses how to eliminate the "gap" in a panorama due to accumulated registration errors. Section 7 presents our algorithm for projecting our panoramas onto texture-mapped 3D models (environment maps).
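The per-image transformation described above maps image pixels to viewing directions in the world via the camera matrix. A minimal sketch of that mapping, assuming a simple intrinsic matrix K built from a focal length f and optical center (cx, cy) and a 3x3 camera rotation R (the function name and parameter conventions are illustrative, not the paper's code):

```python
import numpy as np

def viewing_direction(x, y, f, cx, cy, R):
    """Map an image pixel (x, y) to a unit viewing direction in the
    world: direction ~ R K^{-1} (x, y, 1)^T."""
    K = np.array([[f, 0.0, cx],
                  [0.0, f, cy],
                  [0.0, 0.0, 1.0]])
    # Back-project the pixel through the intrinsics, then rotate
    # into the world frame.
    ray = R @ np.linalg.solve(K, np.array([x, y, 1.0]))
    return ray / np.linalg.norm(ray)
```

With R = I, the optical center maps to the direction (0, 0, 1), i.e., straight down the camera's optical axis; changing R re-aims the whole image without touching the pixel data, which is what lets the mosaic defer its final output representation.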
We close with a discussion and a description of ongoing and future work.

2 Cylindrical and spherical panoramas

Cylindrical panoramas are commonly used because of their ease of construction. To build a cylindrical panorama, a sequence of images is taken by a camera mounted on a leveled tripod. If the camera focal length or field of view is known, each perspective image can be warped into cylindrical coordinates. Figure 1a shows two overlapping cylindrical images—notice how horizontal lines become curved.

To build a cylindrical panorama, we map world coordinates p = (X, Y, Z) to 2D cylindrical screen coordinates (θ, v) using

    θ = tan⁻¹(X/Z),    v = Y / √(X² + Z²)

where θ is the panning angle and v is the scanline.
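This warp can be sketched for a single pixel as follows, assuming the focal length f and optical center (cx, cy) are known in pixels; the function name and the re-centering convention (adding cx, cy back so the result lands in display coordinates) are our own, not from the paper:

```python
import math

def perspective_to_cylindrical(x, y, f, cx, cy):
    """Map a pixel (x, y) of a perspective image to cylindrical
    screen coordinates (f*theta, f*v), re-centered at (cx, cy)."""
    # The world ray through pixel (x, y) is (X, Y, Z) = (x-cx, y-cy, f).
    X, Y, Z = x - cx, y - cy, f
    theta = math.atan2(X, Z)          # panning angle
    v = Y / math.hypot(X, Z)          # cylindrical scanline
    return f * theta + cx, f * v + cy
```

The optical center maps to itself, while pixels far from the center are pulled inward (f·θ grows slower than x - cx), which is exactly why straight horizontal lines appear curved after the warp.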

