Light Field Rendering

Marc Levoy and Pat Hanrahan
Computer Science Department
Stanford University

Address: Gates Computer Science Building 3B, Stanford University, Stanford, CA 94305
http://www-graphics.stanford.edu

Abstract

A number of techniques have been proposed for flying through scenes by redisplaying previously rendered or digitized views. Techniques have also been proposed for interpolating between views by warping input images, using depth information or correspondences between multiple images. In this paper, we describe a simple and robust method for generating new views from arbitrary camera positions without depth information or feature matching, simply by combining and resampling the available images. The key to this technique lies in interpreting the input images as 2D slices of a 4D function - the light field. This function completely characterizes the flow of light through unobstructed space in a static scene with fixed illumination.

We describe a sampled representation for light fields that allows for both efficient creation and display of inward and outward looking views. We have created light fields from large arrays of both rendered and digitized images. The latter are acquired using a video camera mounted on a computer-controlled gantry. Once a light field has been created, new views may be constructed in real time by extracting slices in appropriate directions. Since the success of the method depends on having a high sample rate, we describe a compression system that is able to compress the light fields we have generated by more than a factor of 100:1 with very little loss of fidelity. We also address the issues of antialiasing during creation, and resampling during slice extraction.

CR Categories: I.3.2 [Computer Graphics]: Picture/Image Generation — Digitizing and scanning, Viewing algorithms; I.4.2 [Computer Graphics]: Compression — Approximate methods

Additional keywords: image-based rendering, light field, holographic stereogram, vector quantization, epipolar analysis

1. Introduction

Traditionally the input to a 3D graphics system is a scene consisting of geometric primitives composed of different materials and a set of lights. Based on this input specification, the rendering system computes and outputs an image. Recently a new approach to rendering has emerged: image-based rendering. Image-based rendering systems generate different views of an environment from a set of pre-acquired imagery. There are several advantages to this approach:

• The display algorithms for image-based rendering require modest computational resources and are thus suitable for real-time implementation on workstations and personal computers.

• The cost of interactively viewing the scene is independent of scene complexity.

• The source of the pre-acquired images can be from a real or virtual environment, i.e., from digitized photographs or from rendered models. In fact, the two can be mixed together.

The forerunner to these techniques is the use of environment maps to capture the incoming light in a texture map [Blinn76, Greene86]. An environment map records the incident light arriving from all directions at a point. The original use of environment maps was to efficiently approximate reflections of the environment on a surface. However, environment maps also may be used to quickly display any outward looking view of the environment from a fixed location but at a variable orientation. This is the basis of the Apple QuickTime VR system [Chen95]. In this system environment maps are created at key locations in the scene. The user is able to navigate discretely from location to location, and while at each location continuously change the viewing direction.
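The paper describes environment-map viewing only in prose, so the following is a minimal illustrative sketch, not taken from the paper: every viewing direction from the fixed capture location indexes a latitude-longitude radiance map, so any orientation can be displayed without acquiring new imagery. The map layout, axis conventions, and all function names here are assumptions made for illustration.

import math

def direction_to_latlong_uv(d):
    # Convert a unit viewing direction to (u, v) in [0, 1) x [0, 1].
    # Assumed layout (not from the paper): u is azimuth, v is polar angle from the +y "up" axis.
    x, y, z = d
    theta = math.acos(max(-1.0, min(1.0, y)))        # polar angle, 0..pi
    phi = math.atan2(z, x) % (2.0 * math.pi)         # azimuth, 0..2*pi
    return phi / (2.0 * math.pi), theta / math.pi

def sample_environment(env, d):
    # Nearest-neighbor lookup of the radiance arriving from direction d.
    # env is a rows x cols grid of RGB triples; filtering is omitted for brevity.
    u, v = direction_to_latlong_uv(d)
    rows, cols = len(env), len(env[0])
    j = min(int(u * cols), cols - 1)
    i = min(int(v * rows), rows - 1)
    return env[i][j]

def render_view(env, right, up, forward, fov_deg=60.0, width=64, height=48):
    # Render an outward-looking view from the single point the map was captured at.
    # The camera may rotate (choice of right/up/forward) but not translate.
    half = math.tan(math.radians(fov_deg) / 2.0)
    aspect = width / height
    image = []
    for r in range(height):
        row = []
        for c in range(width):
            sx = (2.0 * (c + 0.5) / width - 1.0) * half * aspect
            sy = (1.0 - 2.0 * (r + 0.5) / height) * half
            d = [forward[k] + sx * right[k] + sy * up[k] for k in range(3)]
            norm = math.sqrt(sum(t * t for t in d))
            row.append(sample_environment(env, [t / norm for t in d]))
        image.append(row)
    return image

Rotating the right/up/forward basis and calling render_view again changes the viewing direction at essentially no cost; what such a lookup cannot do is move the viewpoint, which is the limitation discussed next.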
The major limitation of rendering systems based on environment maps is that the viewpoint is fixed. One way to relax this fixed position constraint is to use view interpolation [Chen93, Greene94, Fuchs94, McMillan95a, McMillan95b, Narayanan95]. Most of these methods require a depth value for each pixel in the environment map, which is easily provided if the environment maps are synthetic images. Given the depth value it is possible to reproject points in the environment map from different vantage points to warp between multiple images. The key challenge in this warping approach is to "fill in the gaps" when previously occluded areas become visible.

Another approach to interpolating between acquired images is to find corresponding points in the two [Laveau94, McMillan95b, Seitz95]. If the positions of the cameras are known, this is equivalent to finding the depth values of the corresponding points. Automatically finding correspondences between pairs of images is the classic problem of stereo vision, and unfortunately, although many algorithms exist, these algorithms are fairly fragile and may not always find the correct correspondences.

In this paper we propose a new technique that is robust and allows much more freedom in the range of possible views. The major idea behind the technique is a representation of the light field, the radiance as a function of position and direction, in regions of space free of occluders (free space). In free space, the light field is a 4D, not a 5D function. An image is a two-dimensional slice of the 4D light field. Creating a light field from a set of images corresponds to inserting each 2D slice into the 4D light field representation. Similarly, generating new views corresponds to extracting and resampling a slice.

Generating a new image from a light field is quite different from previous view interpolation approaches. First, the new image is generally formed from many different pieces of the original input images, and need not look like any of them. Second, no model information, such as depth values or image correspondences, is needed to extract the image values. Third, image generation involves only resampling, a simple linear process.

This representation of the light field is similar to the epipolar volumes used in computer vision [Bolles87] and to horizontal-parallax-only holographic stereograms.
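The depth-based warping mentioned in the view-interpolation paragraph above can be pictured with a small forward-warping sketch. This is not the algorithm of any of the cited systems; the shared pinhole intrinsics K, the 4x4 source-to-destination transform, and the nearest-pixel splatting are all simplifying assumptions made for illustration.

import numpy as np

def reproject(image, depth, K, src_to_dst):
    # Forward-warp each source pixel, using its depth, into a new camera.
    # image: (h, w, 3); depth: (h, w) z-depths; K: 3x3 intrinsics assumed shared by
    # both cameras; src_to_dst: 4x4 rigid transform between the two camera frames.
    h, w = depth.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    Kinv = np.linalg.inv(K)
    for y in range(h):
        for x in range(w):
            # Back-project the pixel to a 3D point in the source camera frame.
            p = depth[y, x] * (Kinv @ np.array([x + 0.5, y + 0.5, 1.0]))
            # Transform into the destination frame and project with the pinhole model.
            q = (src_to_dst @ np.append(p, 1.0))[:3]
            if q[2] <= 0.0:
                continue
            u, v, _ = K @ (q / q[2])
            ui, vi = int(u), int(v)
            if 0 <= ui < w and 0 <= vi < h:
                out[vi, ui] = image[y, x]        # no z-buffer: later writes win
                filled[vi, ui] = True
    # Pixels where filled is False are the newly exposed "gaps" that must be filled in.
    return out, filled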
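The slice-extraction step described above can likewise be made concrete with a sketch. This is not the paper's implementation: it assumes a regularly sampled light field stored as a 4D array under a two-plane (u, v, s, t) parameterization, quadrilinear interpolation during resampling, at least two samples per axis, and illustrative plane placement and names.

import numpy as np

def ray_plane_params(origin, direction, z_uv=0.0, z_st=1.0):
    # Intersect a ray with the two parameterization planes z = z_uv and z = z_st
    # and return its (u, v, s, t) coordinates. Assumes direction[2] != 0.
    o = np.asarray(origin, float)
    d = np.asarray(direction, float)
    u, v = (o + (z_uv - o[2]) / d[2] * d)[:2]
    s, t = (o + (z_st - o[2]) / d[2] * d)[:2]
    return u, v, s, t

def sample_light_field(L, u, v, s, t, extent=1.0):
    # Quadrilinearly interpolate a sampled light field L[iu, iv, is_, it, channel].
    # Each plane coordinate is mapped from [-extent, extent] onto the array's index range.
    cells = []
    for x, n in zip((u, v, s, t), L.shape[:4]):
        f = np.clip((x + extent) / (2.0 * extent) * (n - 1), 0.0, n - 1.0)
        i0 = min(int(f), n - 2)
        cells.append((i0, f - i0))
    out = np.zeros(L.shape[4])
    for corner in range(16):                     # the 16 corners of the 4D cell
        w, idx = 1.0, []
        for axis in range(4):
            i0, frac = cells[axis]
            bit = (corner >> axis) & 1
            idx.append(i0 + bit)
            w *= frac if bit else 1.0 - frac
        out += w * L[tuple(idx)]
    return out

A new view is then just one such lookup per output pixel: generate the pixel's ray from the desired camera, convert it with ray_plane_params, and write the interpolated radiance into the image; no depth values or correspondences are consulted.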

