
6.098 Digital and Computational Photography
6.882 Advanced Computational Photography

Refocusing & Light Fields
Frédo Durand, Bill Freeman
MIT - EECS

Final projects
• Send your slides by noon on Thursday.
• Send final report

Wavefront coding

Is depth of field a blur?
• Depth of field is NOT a convolution of the image
• The circle of confusion varies with depth
• There are interesting occlusion effects
• (If you really want a convolution, there is one, but in 4D space… more soon)
From Macro Photography

Wavefront coding
• CDM-Optics, U of Colorado, Boulder
• The worst title ever: "A New Paradigm for Imaging Systems", Cathey and Dowski, Appl. Optics, 2002
• Improve depth of field using weird optics & deconvolution
• http://www.cdm-optics.com/site/publications.php

Wavefront coding
• Idea: deconvolution to deblur out-of-focus regions
• Convolution = filter (e.g. blur, sharpen)
• Sometimes we can cancel a convolution with another convolution
  – Like applying sharpen after blur (kind of)
  – This is called deconvolution
• Best studied in the Fourier domain (of course!)
  – Convolution = multiplication of spectra
  – Deconvolution = multiplication by the inverse spectrum

Deconvolution
• Assume we know the blurring kernel k:
  f' = f ⊗ k  ⇒  F' = F·K (in Fourier space)
• Invert by F = F'/K (in Fourier space)
• Well-known problems with deconvolution:
  – Impossible to invert for ω where K(ω) = 0
  – Numerically unstable where K(ω) is small

Wavefront coding
• Idea: deconvolution to deblur out-of-focus regions
• Problem 1: depth of field blur is not shift-invariant
  – It depends on depth
  ⇒ If depth of field is not a convolution, it's harder to use deconvolution ;-(
• Problem 2: depth of field blur "kills information"
  – The Fourier transform of the blurring kernel has lots of zeros
  – Deconvolution is ill-posed

Wavefront coding
• Idea: deconvolution to deblur out-of-focus regions
• Problem 1: depth of field blur is not shift-invariant
• Problem 2: depth of field blur "kills information"
• Solution: change the optical system so that
  – Rays don't converge anymore
  – Image blur is the same for all depths
  – The blur spectrum does not have too many zeros
• How it's done
  – Phase plate (wave-optics effect, diffraction)
  – Pretty much bends light
  – Does things similar to spherical aberrations

Ray version

Other application
• Single-image depth sensing
  – Blur depends A LOT on depth
  – Passive Ranging Through Wave-Front Coding: Information and Application. Johnson, Dowski, Cathey
  – http://graphics.stanford.edu/courses/cs448a-06-winter/johnson-ranging-optics00.pdf

Single-image depth sensing

Important take-home idea: coded imaging
• What the sensor records is not the image we want; it has been coded (kind of like in cryptography)
• Image processing decodes it

Other forms of coded imaging
• Tomography
  – e.g. http://en.wikipedia.org/wiki/Computed_axial_tomography
  – Lots of cool Fourier transforms there
• X-ray telescopes & coded apertures
  – e.g. http://universe.gsfc.nasa.gov/cai/coded_intr.html
• Ramesh's motion blur
• and to some extent, Bayer mosaics
See Berthold Horn's course

Plenoptic camera refocusing

Plenoptic/light field cameras
• Lippmann 1908
  – "Window to the world"
• Adelson and Wang, 1992
  – Depth computation
• Revisited by Ng et al.
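The Fourier-domain deconvolution described in the wavefront-coding slides above (invert F' = F·K by dividing by K, unstable where K(ω) is small) can be sketched numerically. This is a minimal 1D illustration, not from the lecture; the signal, the box kernel, and the Wiener-style regularization constant `eps` are invented for illustration.

```python
import numpy as np

# Sketch of Fourier-domain deconvolution, in the slides' notation:
# f' = f (x) k  =>  F' = F.K, inverted (with regularization) by dividing by K.
n = 64
f = np.zeros(n)
f[20:30] = 1.0                     # a box "image" (1D for simplicity)
k = np.zeros(n)
k[:5] = 1.0 / 5.0                  # 5-tap box blur kernel (circular)

F = np.fft.fft(f)
K = np.fft.fft(k)
F_blur = F * K                     # convolution = multiplication of spectra
f_blur = np.real(np.fft.ifft(F_blur))

# Naive inverse F = F'/K blows up wherever K(omega) is tiny, so use a
# Wiener-style regularized inverse instead of dividing directly.
eps = 1e-3
K_inv = np.conj(K) / (np.abs(K) ** 2 + eps)
f_rec = np.real(np.fft.ifft(F_blur * K_inv))
```

With this kernel the spectrum has no exact zeros but several near-zeros; the regularized inverse recovers `f` far more closely than the blurred signal, at the cost of slightly attenuating the near-zero frequencies — exactly the trade-off the slides call ill-posedness.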
  for refocusing

The Plenoptic Function

Back to the images that surround us
• How to describe (and capture) all the possible images around us?

The plenoptic function
• [Adelson & Bergen 91] http://web.mit.edu/persci/people/adelson/pub_pdfs/elements91.pdf
• From the Greek for "total"
• See also http://www.everything2.com/index.pl?node_id=989303&lastnode_id=1102051

Plenoptic function
• 3D for viewpoint
• 2D for ray direction
• 1D for wavelength
• 1D for time
• can add polarization
From McMillan 95

Light fields

Idea
• Reduce to outside the convex hull of a scene
• For every line in space
• Store RGB radiance
• Then rendering is just a lookup
• Two major publications in 1996:
  – Light field rendering [Levoy & Hanrahan]
    • http://graphics.stanford.edu/papers/light/
  – The Lumigraph [Gortler et al.]
    • Adds some depth information
    • http://cs.harvard.edu/~sjg/papers/lumigraph.pdf

How many dimensions for 3D lines?
• 4: e.g. 2 for direction, 2 for intersection with a plane

Two-plane parameterization
• Line parameterized by its intersections with 2 planes
  – Careful, there are different "isotopes" of such parameterizations (slightly different meanings of s,t,u,v)

Let's make life simpler: 2D
• How many dimensions for 2D lines?
  – Only 2, e.g. y = ax + b ↔ (a,b)

Let's make life simpler: 2D
• 2-line parameterization

View?
• View ⇒ line in ray space
• Kind of cool: ray ⇒ point, and view around a point ⇒ line
• There is a duality

Back to 3D/4D
From Gortler et al.

Cool visualization
From Gortler et al.

View = 2D plane in 4D
• With various resampling issues

Demo: light field viewer

Reconstruction, antialiasing, depth of field
Slide by Marc Levoy

Aperture reconstruction
• So far, we have talked about pinhole views
• Aperture reconstruction: depth of field, better antialiasing
Slide by Marc Levoy

Small aperture
Image Isaksen et al.

Big aperture
Image Isaksen et al.

Light field sampling
[Chai et al. 00, Isaksen et al. 00, Stewart et al. 03]
  – Light field spectrum as a function of object distance
  – Slope inversely proportional to depth
  – http://graphics.cs.cmu.edu/projects/plenoptic-sampling/ps_projectpage.htm
  – http://portal.acm.org/citation.cfm?id=344779.344929
From [Chai et al. 2000]

Light field cameras

Plenoptic camera
• For depth extraction
• Adelson & Wang 92 http://www-bcs.mit.edu/people/jyawang/demos/plenoptic/plenoptic.html

Camera array
• Wilburn et al. http://graphics.stanford.edu/papers/CameraArray/

Camera arrays
• http://graphics.stanford.edu/projects/array/

MIT version
• Jason Yang

Bullet time
• Time splice http://www.ruffy.com/frameset.htm

Robotic Camera
Image Leonard McMillan
Image Levoy et al.

Flatbed scanner camera
• By Jason Yang

Plenoptic camera refocusing

Conventional Photograph
Slide by Ren Ng.

Light Field Photography
• Capture the light field inside the camera body
Slide by Ren Ng.

Hand-Held Light Field Camera
• Medium format digital camera (shown in use): 16-megapixel sensor, microlens array
Slide by Ren Ng.

Light Field in a Single Exposure
Slide by Ren Ng.

Light Field Inside the Camera Body
Slide by Ren Ng.

Digital Refocusing
Slide by Ren Ng.

Digitally stopping down
• stopping down = summing only the central portion of each microlens image
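The digital refocusing idea on the Ng slides comes down to shift-and-add: shift each sub-aperture view in proportion to its aperture coordinate, then average. A minimal "flatland" (1D) sketch, not from the slides; the scene, the number of views, and the disparity value are invented for illustration.

```python
import numpy as np

# Shift-and-add refocusing on a synthetic 1D light field. An object at a
# given depth appears shifted by (disparity * u) pixels in the sub-aperture
# view at aperture coordinate u; undoing that shift before averaging brings
# the object into focus, while any other shift blurs it.
n = 128
scene = np.zeros(n)
scene[60:68] = 1.0                    # a small bright object

disparity = 2                         # true per-view shift of the object
us = range(-3, 4)                     # 7 sub-aperture positions
views = [np.roll(scene, u * disparity) for u in us]

def refocus(views, us, shift):
    """Average the views after undoing a per-view shift of `shift` pixels."""
    return np.mean([np.roll(v, -u * shift) for v, u in zip(views, us)], axis=0)

in_focus = refocus(views, us, disparity)   # shift matches true disparity
out_focus = refocus(views, us, 0)          # wrong depth: object smears out
```

Refocusing at the true disparity reproduces the sharp scene exactly, while refocusing at the wrong depth spreads the object's energy over neighboring pixels. Summing only a subset of `us` (the central views) is the "digitally stopping down" operation from the last slide.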

