Modeling Light
15-463: Computational Photography
Alexei Efros, CMU, Fall 2011
© Michal Havlik

What is light?
Electromagnetic radiation (EMR) moving along rays in space
• R(λ) is EMR, measured in units of power (watts)
– λ is wavelength

Light:
• Travels far
• Travels fast
• Travels in straight lines
• Interacts with stuff
• Bounces off things
• Is produced in nature
• Has lots of energy
-- Steve Shafer

Point of observation
Figures © Stephen E. Palmer, 2002

What do we see?
3D world → 2D image

What do we see?
3D world → 2D image (painted backdrop)

On Simulating the Visual Experience
Just feed the eyes the right data
• No one will know the difference!
Philosophy:
• Ancient question: "Does the world really exist?"
Science fiction:
• Many, many, many books on the subject, e.g. slowglass from "Light of Other Days"
• Latest take: The Matrix
Physics:
• Slowglass might be possible?
Computer Science:
• Virtual Reality
To simulate, we need to know: what does a person see?

The Plenoptic Function
Q: What is the set of all things that we can ever see?
A: The Plenoptic Function (Adelson & Bergen)
Let's start with a stationary person and try to parameterize everything that he can see…
Figure by Leonard McMillan

Grayscale snapshot: P(θ,φ)
Intensity of light
• Seen from a single viewpoint
• At a single time
• Averaged over the wavelengths of the visible spectrum
(can also do P(x,y), but spherical coordinates are nicer)

Color snapshot: P(θ,φ,λ)
Intensity of light
• Seen from a single viewpoint
• At a single time
• As a function of wavelength

A movie: P(θ,φ,λ,t)
Intensity of light
• Seen from a single viewpoint
• Over time
• As a function of wavelength

Holographic movie: P(θ,φ,λ,t,VX,VY,VZ)
Intensity of light
• Seen from ANY viewpoint
• Over time
• As a function of wavelength

The Plenoptic Function
• Can reconstruct every possible view, at every moment, from every position, at every wavelength
• Contains every photograph, every movie, everything that anyone has ever seen!
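To make the reduced case concrete, here is a minimal sketch (not from the lecture; the function name, the equirectangular row/column layout, and the toy data are all assumptions) of sampling a discretized grayscale P(θ,φ): the full plenoptic function is 7D, but for a stationary observer at a single instant it collapses to intensity as a function of viewing direction, which is just a lookup into a panorama:

```python
# Hypothetical sketch: sample a discretized P(theta, phi).
# Assumes an equirectangular layout: theta (azimuth) along columns,
# phi (polar angle) along rows. Names are illustrative only.
import numpy as np

def sample_plenoptic(panorama, theta, phi):
    """Look up intensity for viewing direction (theta, phi).

    panorama: H x W grayscale array storing P(theta, phi), with
              theta in [0, 2*pi) along columns and phi in [0, pi]
              along rows (nearest-neighbor lookup, no interpolation).
    """
    h, w = panorama.shape
    col = int(theta / (2 * np.pi) * w) % w       # wrap azimuth
    row = min(int(phi / np.pi * h), h - 1)        # clamp polar angle
    return panorama[row, col]

# Tiny example: a 4x8 "panorama" whose intensity grows with azimuth.
pano = np.tile(np.arange(8, dtype=float), (4, 1))
print(sample_plenoptic(pano, theta=np.pi, phi=np.pi / 2))  # prints 4.0
```

Adding the remaining plenoptic dimensions (λ, t, VX, VY, VZ) would just add axes and index computations to the same lookup.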
It completely captures our visual reality! Not bad for a function… P(θ,φ,λ,t,VX,VY,VZ)

Sampling the Plenoptic Function (top view): just look it up -- QuickTime VR

Ray
Let's not worry about time and color: 5D
• 3D position
• 2D direction
P(θ,φ,VX,VY,VZ)
Slide by Rick Szeliski and Michael Cohen

Surface / Camera / Lighting: no change in radiance along a ray
How can we use this?

Ray Reuse
Infinite line
• Assume light is constant (vacuum, non-dispersive medium)
4D
• 2D direction
• 2D position
Only need the plenoptic function on a surface -- synthesizing novel views

Lumigraph / Lightfield
Outside convex space: 4D (stuff inside, empty outside)

Lumigraph - Organization
• 2D position, 2D direction: (s, θ)
• 2D position, 2D position: the two-plane parameterization, (s,t) and (u,v)
• Hold s,t constant, let u,v vary: an image

Lumigraph - Capture
• Idea 1: move camera carefully over the s,t plane; gantry -- see the Lightfield paper
• Idea 2: move camera anywhere, then rebin -- see the Lumigraph paper

Lumigraph - Rendering
For each output pixel:
• determine s,t,u,v
• either use the closest discrete RGB sample, or interpolate nearby values
Nearest: closest s, closest u, draw it
Blend 16 nearest: quadrilinear interpolation

Stanford multi-camera array
• 640 × 480 pixels × 30 fps × 128 cameras
• synchronized timing
• continuous streaming
• flexible arrangement

Light field photography using a handheld plenoptic camera
Ren Ng, Marc Levoy, Mathieu Brédif, Gene Duval, Mark Horowitz and Pat Hanrahan
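The "blend 16 nearest" rendering step can be sketched as quadrilinear interpolation over a discrete 4D light field array L[s,t,u,v]. This is an illustrative implementation, not code from the Lumigraph paper; the function name and array layout are assumptions:

```python
# Hypothetical sketch: quadrilinear interpolation of a 4D light field.
# L is indexed by integer (s, t, u, v); the query coordinates are floats.
# Assumes each query lies within [0, size-2] along every axis so all
# 16 corner samples of the enclosing 4D cell exist.
import numpy as np

def quadrilinear(L, s, t, u, v):
    """Blend the 16 nearest samples of L around (s, t, u, v)."""
    coords = (s, t, u, v)
    base = [int(np.floor(c)) for c in coords]      # lower cell corner
    frac = [c - b for c, b in zip(coords, base)]   # offsets in [0, 1)
    out = 0.0
    for corner in range(16):                       # 2^4 corners of the cell
        idx, w = [], 1.0
        for d in range(4):
            bit = (corner >> d) & 1
            idx.append(base[d] + bit)
            w *= frac[d] if bit else 1.0 - frac[d]
        out += w * L[tuple(idx)]
    return out
```

Because the weights along each axis sum to one, the scheme reproduces any function that is linear in (s, t, u, v) exactly, and reduces to nearest-neighbor when the query lands on a sample.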
© 2005 Marc Levoy

Conventional versus light field camera (uv-plane, st-plane)

Prototype camera
• 4000 × 4000 pixels ÷ 292 × 292 lenses = 14 × 14 pixels per lens
• Contax medium format camera, Kodak 16-megapixel sensor
• Adaptive Optics microlens array, 125 µm square-sided microlenses

Digitally stopping down
• stopping down = summing only the central portion of each microlens

Digital refocusing
• refocusing = summing windows extracted from several microlenses
• example of digital refocusing

Digitally moving the observer
• moving the observer = moving the window we extract from the microlenses
• example of moving the observer; moving backward and forward

On sale now: lytro.com

3D Lumigraph
One row of the s,t plane
• i.e., hold t constant
• thus s,u,v
• a "row of images"
by David Dewey

P(x,t)

2D: Image
What is an image?
All rays through a point
• Panorama?
Image plane: 2D position
Slide by Rick Szeliski and Michael Cohen

Spherical Panorama
All light rays through a point form a panorama
Totally captured in a 2D array -- P(θ,φ)
Where is the geometry???
See also: 2003 New Year's Eve
http://www.panoramas.dk/fullscreen3/f1.html

Other ways to sample the Plenoptic Function
Moving in time:
• Spatio-temporal volume: P(θ,φ,t)
• Useful to study temporal changes
• Long an interest of artists: Claude Monet, Haystacks studies
Space-time images: other ways to slice the plenoptic function… (x, y, t)

The "Theatre Workshop" Metaphor (Adelson & Pentland, 1996)
Given a desired image:
• Painter (images)
• Lighting Designer (environment maps)
• Sheet-metal Worker (geometry) -- let surface normals do all the work!
Show Naimark SF MOMA video:
http://www.debevec.org/Naimark/naimark-displacements.mov

… working together
Want to minimize cost
Each one does what's easiest for him
• Geometry – big things
• Images
– detail
• Lighting – illumination effects
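The digitally-stopping-down and refocusing operations from the plenoptic-camera slides can be sketched on a 4D light field array. This is a simplified illustration under assumed conventions (integer pixel shifts, a `(S, T, U, V)` layout with `u,v` indexing the samples under each microlens); the function names and the `shift` parameterization are hypothetical, not from the paper:

```python
# Hypothetical sketches of two plenoptic-camera operations.
# lightfield has shape (S, T, U, V): one (U x V) block of directional
# samples under each of S x T microlenses.
import numpy as np

def stop_down(lightfield, aperture):
    """Synthetic stopping-down: average only the central
    aperture x aperture directional samples under each microlens."""
    S, T, U, V = lightfield.shape
    u0 = (U - aperture) // 2
    v0 = (V - aperture) // 2
    window = lightfield[:, :, u0:u0 + aperture, v0:v0 + aperture]
    return window.mean(axis=(2, 3))   # one output pixel per microlens

def refocus(lightfield, shift):
    """Shift-and-add refocusing: translate each sub-aperture image in
    proportion to its (u, v) offset from the center, then average.
    Integer shifts only, with wrap-around at the borders for brevity."""
    S, T, U, V = lightfield.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round(shift * (u - U // 2)))
            dv = int(round(shift * (v - V // 2)))
            out += np.roll(lightfield[:, :, u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

With `aperture=1`, `stop_down` keeps only the central ray through each microlens (a pinhole view); larger apertures average more directions. In `refocus`, varying `shift` moves the synthetic focal plane, and `shift=0` reproduces the all-rays average.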