Princeton COS 598B - View Interpolation for Image Synthesis

View Interpolation for Image Synthesis

Shenchang Eric Chen, Lance Williams
Apple Computer, Inc.

ABSTRACT

Image-space simplifications have been used to accelerate the calculation of computer graphic images since the dawn of visual simulation. Texture mapping has been used to provide a means by which images may themselves be used as display primitives. The work reported by this paper endeavors to carry this concept to its logical extreme by using interpolated images to portray three-dimensional scenes. The special-effects technique of morphing, which combines interpolation of texture maps and their shape, is applied to computing arbitrary intermediate frames from an array of prestored images. If the images are a structured set of views of a 3D object or scene, intermediate frames derived by morphing can be used to approximate intermediate 3D transformations of the object or scene. Using the view interpolation approach to synthesize 3D scenes has two main advantages. First, the 3D representation of the scene may be replaced with images. Second, the image synthesis time is independent of the scene complexity. The correspondence between images, required for the morphing method, can be predetermined automatically using the range data associated with the images. The method is further accelerated by a quadtree decomposition and a view-independent visible priority. Our experiments have shown that the morphing can be performed at interactive rates on today's high-end personal computers. Potential applications of the method include virtual holograms, a walkthrough in a virtual environment, image-based primitives and incremental rendering.
The method also can be used to greatly accelerate the computation of motion blur and soft shadows cast by area light sources.

CR Categories and Subject Descriptors: I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism.

Additional Keywords: image morphing, interpolation, virtual reality, motion blur, shadow, incremental rendering, real-time display, virtual holography, motion compensation.

1 INTRODUCTION

Generating a large number of images of an environment from closely spaced viewpoints is a very useful capability. A traditional application is a flight in the cabin of an aircraft simulator, whereas the contemporary model is perhaps a walk through a virtual environment; in both cases the same scene is displayed from the view of a virtual camera controlled by the user. The computation of global illumination effects, such as shadows, diffuse and specular inter-reflections, also requires a large number of visibility calculations. A typical approach to this problem is to rely on the computer to repetitively render the scene from different viewpoints. This approach has two major drawbacks. First, real-time rendering of complex scenes is computationally expensive and usually requires specialized graphics hardware. Second, the rendering time is usually not constant and is dependent on the scene complexity. This problem is particularly critical in simulation and virtual reality applications because of the demand for real-time feedback. Since scene complexity is potentially unbounded, the second problem will always exist regardless of the processing power of the computer.

A number of approaches have been proposed to address this problem. Most of these approaches use a preprocess to compute a subset of the scene visible from a specified viewing region [AIRE91, TELL92]. Only the potentially visible objects are processed in the walkthrough time.
This approach does not completely solve the problem because there may be viewing regions from which all objects are visible. Greene and Kass [GREE93] developed a method to approximate the visibility at a location from adjacent environment maps. The environment maps are Z-buffered images rendered from a set of discrete viewpoints in 3D space. Each environment map shows a complete view of the scene from a point. An environment map can take the form of a cubic map, computed by rendering a cube of 90˚ views radiating from that point [GREE86]. The environment maps are pre-computed and stored with viewpoints arranged in a structured way, such as a 3D lattice. An image from a new viewpoint can be generated by re-sampling the environment maps stored in adjacent locations. The re-sampling process involves rendering the pixels in the environment maps as 3D polygons from the new viewpoint. The advantage of this approach is that the rendering time is proportional to the environment map resolutions and is independent of the scene complexity. However, this method requires Z-buffer hardware to render a relatively large number of polygons interactively, a feature still not available on most low-end computers.

This paper presents a fast method for generating intermediate images from images stored at nearby viewpoints. The method has advantages similar to those of Greene and Kass' method. The generation of a new image is independent of the scene complexity. However, instead of drawing every pixel as a 3D polygon, our method uses techniques similar to those used in image morphing [BEIE92]. Adjacent images are "morphed" to create a new image for an in-between viewpoint. The morphing makes use of pre-computed correspondence maps and, therefore, is very efficient.
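The re-sampling step described above — treating each environment-map pixel with known depth as a 3D point and projecting it into the new view — can be sketched with a standard pinhole camera model. This is an illustrative sketch, not code from the paper; the function name, matrix conventions, and parameters are assumptions:

```python
import numpy as np

def reproject_pixel(u, v, depth, K, cam_to_world_src, world_to_cam_dst):
    """Lift a source-view pixel with known depth into 3D, then project
    it into a destination view (simple pinhole model, no occlusion)."""
    # Back-project (u, v) at the given depth into source-camera coordinates.
    p_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    # Transform to world coordinates (4x4 pose), then into the destination camera.
    p_world = cam_to_world_src @ np.append(p_cam, 1.0)
    p_dst = world_to_cam_dst @ p_world
    # Perspective projection back into image coordinates.
    uvw = K @ p_dst[:3]
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```

With identical source and destination cameras, a pixel maps back to itself; with a shifted destination pose, nearby pixels move coherently, which is the coherence the next section exploits.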
Our experiments with the new method have shown that it can be performed at interactive rates on inexpensive personal computers without specialized hardware.

The new method is based on the observation that a sequence of images from closely spaced viewpoints is highly coherent. Most of the adjacent images in the sequence depict the same objects from slightly different viewpoints. Our method uses the camera's position and orientation and the range data of the images to determine a pixel-by-pixel correspondence between images automatically. The pairwise correspondence between two successive images can be pre-computed and stored as a pair of morph maps. Using these maps, corresponding pixels are interpolated interactively under the user's control to create in-between images.

Pixel correspondence can be established if range data and the camera transformation are available. For synthetic images, range data and the camera transformation are easily obtainable. For natural images, range data can be acquired from a ranging camera [BESL88], computed by photogrammetry [WOLF83], or modeled by a human artist [WILL90]. The camera transformation can be found if the
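The morph-map interpolation described above can be sketched as a forward warp: each pixel stores a precomputed offset to its correspondence in the adjacent view, and for an in-between viewpoint it moves a fraction t of the way along that offset. This is a minimal illustrative sketch under assumed conventions (per-pixel (dy, dx) offsets, nearest-pixel "splat" with simple overwrite), not the paper's implementation, which also handles visibility ordering and holes:

```python
import numpy as np

def interpolate_view(img0, offsets, t):
    """Forward-warp img0 toward an adjacent view: each pixel moves a
    fraction t (in [0, 1]) along its morph-map offset (dy, dx)."""
    h, w = img0.shape[:2]
    out = np.zeros_like(img0)
    for y in range(h):
        for x in range(w):
            # Move the pixel t of the way toward its correspondence.
            ny = int(round(y + t * offsets[y, x, 0]))
            nx = int(round(x + t * offsets[y, x, 1]))
            if 0 <= ny < h and 0 <= nx < w:
                out[ny, nx] = img0[y, x]  # naive overwrite splat
    return out
```

Because the offsets are precomputed, generating each in-between frame costs only one pass over the image pixels, independent of scene complexity — the key efficiency claim of the method.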

