CORNELL CS 6670 - Lecture 22: Computational photography

Slide outline: Announcements; BRDFs can be incredibly complicated; Shape from shading; Photometric stereo; Solving the equations; More than three lights; Example; Computing light source directions; Depth from normals; Limitations; Finding the direction of the light source; Application: detecting composite photos; Example-based photometric stereo; Shiny things; Virtual views (velvet, brushed fur); Questions?; Computational photography; The ultimate camera; Creating the ultimate camera; Noise reduction; Field of view; Improving resolution: gigapixel images; Improving resolution: super resolution; Intuition (slides from Yossi Rubner and Miki Elad); Handling more general 2D motions; Super-resolution; How does this work? [Baker and Kanade, 2002]; Limits of super-resolution [Baker and Kanade, 2002]; Dynamic range; HDR images: merge multiple inputs; HDR images: merged; Camera is not a photometer!; Capture and composite several photos.

Lecture 22: Computational photography
CS6670: Computer Vision
Noah Snavely
(title image: photomatix.com)

Announcements
• Final project midterm reports are due on Tuesday to CMS by 11:59pm.

BRDFs can be incredibly complicated…

Shape from shading
Suppose you can directly measure the angle between the normal and the light source.
• Not quite enough information to compute surface shape
• But it can be, if you add some additional information, for example:
  – assume a few of the normals are known (e.g., along the silhouette)
  – constraints on neighboring normals ("integrability")
  – smoothness
• Hard to get it to work well in practice
  – plus, how many real objects have constant albedo?

Photometric stereo
(figure: surface normal N, viewing direction V, and light source directions L1, L2, L3)
Can write this as a matrix equation.

Solving the equations

More than three lights
• Get better results by using more lights
• What's the size of L^T L?
• Least squares solution: solve for N, k_d as before
  (a short NumPy sketch of this solve follows below)
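The least-squares step above is compact enough to show in code. The following is a minimal sketch, not the course's reference implementation, assuming a Lambertian surface, grayscale images, and known, distant, unit-length light directions; the function and variable names (photometric_stereo, images, lights) are illustrative. Each pixel's intensities satisfy I_i = k_d (N · L_i), so stacking the light directions into a matrix L gives L g = I with g = k_d N; the least-squares solution yields the albedo k_d = |g| and the unit normal N = g / |g|.

```python
import numpy as np

def photometric_stereo(images, lights):
    """Lambertian photometric stereo with n >= 3 distant light sources.

    images: (n, H, W) grayscale intensities, one image per light
    lights: (n, 3) unit light-source directions L_i
    returns: albedo (H, W) and unit normals (H, W, 3)
    """
    n, H, W = images.shape
    I = images.reshape(n, -1)                       # stack pixels: (n, H*W)

    # Least-squares solve of L g = I for g = k_d * N at every pixel,
    # i.e. g = (L^T L)^{-1} L^T I  (L^T L is only 3x3, as the slide notes).
    g, *_ = np.linalg.lstsq(lights, I, rcond=None)  # (3, H*W)

    albedo = np.linalg.norm(g, axis=0)              # k_d = |g|
    normals = g / np.maximum(albedo, 1e-8)          # N = g / |g|
    return albedo.reshape(H, W), normals.T.reshape(H, W, 3)

if __name__ == "__main__":
    # Tiny synthetic check: a flat 4x4 patch with known normal and albedo.
    rng = np.random.default_rng(0)
    L = rng.normal(size=(6, 3))
    L[:, 2] = np.abs(L[:, 2]) + 0.5                 # keep all lights in front
    L /= np.linalg.norm(L, axis=1, keepdims=True)
    N_true, kd_true = np.array([0.0, 0.0, 1.0]), 0.7
    I = kd_true * (L @ N_true)                      # Lambertian shading per light
    imgs = np.tile(I[:, None, None], (1, 4, 4))     # six 4x4 "images"
    albedo, normals = photometric_stereo(imgs, L)
    print(albedo[0, 0], normals[0, 0])              # ~0.7 and ~[0, 0, 1]
```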
Example
(figure: recovered albedo and recovered normal field; Forsyth & Ponce, Sec. 5.4)

Computing light source directions
Trick: place a chrome sphere in the scene
• the location of the highlight tells you where the light source is

Depth from normals
(Forsyth & Ponce, Sec. 5.4; figure: "what we have" vs. "what we want")
Get a similar equation for V2
• Each normal gives us two linear constraints on z
• compute z values by solving a matrix equation
(figure: tangent vectors V1, V2 and normal N)

Example

Limitations
Big problems:
• doesn't work for shiny things or semi-translucent things
• shadows, inter-reflections
Smaller problems:
• camera and lights have to be distant
• calibration requirements
  – measure light source directions, intensities
  – camera response function
Newer work addresses some of these issues. Some pointers for further reading:
• Zickler, Belhumeur, and Kriegman, "Helmholtz Stereopsis: Exploiting Reciprocity for Surface Reconstruction," IJCV, Vol. 49, No. 2/3, pp. 215-227.
• Hertzmann and Seitz, "Example-Based Photometric Stereo: Shape Reconstruction with General, Varying BRDFs," IEEE Trans. PAMI, 2005.

Finding the direction of the light source
P. Nillius and J.-O. Eklundh, "Automatic estimation of the projected light source direction," CVPR 2001

Application: Detecting composite photos
(figure: fake photo vs. real photo?)

Example-based photometric stereo
Aaron Hertzmann (University of Toronto) and Steven M. Seitz (University of Washington)

Shiny things
• "Orientation consistency" (figure: two points with the same surface normal)

Virtual views
(result figures: virtual views of velvet and brushed fur)

Questions?
3-minute break

Computational Photography
(image from Durand & Freeman's MIT course on computational photography)
Today's reading
• Szeliski, Chapter 9

The ultimate camera
What does it do?
• Infinite resolution
• Infinite zoom control
• Desired object(s) are in focus
• No noise
• No motion blur
• Infinite dynamic range (can see dark and bright things)
• ...

Creating the ultimate camera
• The "analog" camera has changed very little in over 100 years
  – we're unlikely to get there by following this path
• More promising is to combine "analog" optics with computational techniques
  – "computational cameras" or "computational photography"
• This lecture will survey techniques for producing higher-quality images by combining optics and computation
• Common themes:
  – take multiple photos
  – modify the camera

Noise reduction
• Take several images and average them
• Why does this work? Basic statistics: the variance of the mean decreases with n, i.e. Var(mean of n samples) = sigma^2 / n
  (a small numerical check of this appears at the end of these notes)

Field of view
• We can artificially increase the field of view by compositing several photos together (project 2)

Improving resolution: gigapixel images
• Max Lyons, 2003: fused 196 telephoto shots
• A few other notable examples:
  – Obama inauguration (gigapan.org)
  – HDView (Microsoft Research)

Improving resolution: super resolution
• What if you don't have a zoom lens?
• For a given band-limited image, the Nyquist sampling theorem states that if a uniform sampling is fine enough (spacing D), perfect reconstruction is possible.

Intuition (slides from Yossi Rubner & Miki Elad)
• Due to our limited camera resolution, we sample using an insufficient 2D grid (spacing 2D)
• However, if we take a second picture, shifting the camera slightly to the right, we obtain a second set of samples
• Similarly, by shifting down we get a third image
• And finally, by shifting down and to the right we get the fourth image
• By combining all four images the desired resolution is obtained, and thus perfect reconstruction is guaranteed

Example
• 3:1 scale-up in each axis using 9 images, with pure global translation between them

Handling more general 2D motions
• What if the camera displacement is arbitrary?
• What if the camera rotates? Gets closer to the object (zoom)?

Super-resolution
Basic idea (a sketch of this linear-system formulation follows below):
• define a destination (dst) image of the desired resolution
• assume the mapping from dst to each input image is known
  – usually a combination of a 2D motion/warp and an average (point-spread function)
  – can be expressed as a set of linear constraints
  – sometimes the mapping is solved for as well
• add some form of regularization (e.g., a "smoothness assumption")
  – can also be expressed using linear constraints
  – but L1 and other nonlinear methods work better

How does this work? [Baker & Kanade, 2002]

Limits of super-resolution [Baker & Kanade, 2002]
• Performance degrades significantly beyond 4x or so
• Doesn't matter how many new images you add
  – space of possible …
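To make the "linear constraints" formulation above concrete, here is a small sketch in NumPy. It is an illustration under simplifying assumptions, not code from the lecture: a 1D signal, a 2x box point-spread function, and known integer sub-pixel shifts (the pure-translation case from the intuition slides). Each low-resolution sample contributes one row of a system A x = b over the unknown high-resolution signal x, a second-difference smoothness regularizer is appended as extra rows, and the stacked system is solved by linear least squares. All names (sr_solve, observations, shifts, smooth) are illustrative.

```python
import numpy as np

def sr_solve(observations, shifts, n_hi, factor=2, smooth=0.05):
    """Super-resolution as linear least squares (1D toy version).

    observations[k][j] is the j-th sample of the k-th low-res signal,
    modeled as the average of `factor` consecutive high-res samples
    starting at index shifts[k] + factor * j.
    """
    rows, vals = [], []
    for obs, s in zip(observations, shifts):
        for j, v in enumerate(obs):
            row = np.zeros(n_hi)                      # one linear constraint
            row[s + factor * j : s + factor * j + factor] = 1.0 / factor
            rows.append(row)
            vals.append(v)

    # Regularization: penalize second differences of the high-res signal.
    D = np.zeros((n_hi - 2, n_hi))
    for i in range(n_hi - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]

    A = np.vstack([np.array(rows), smooth * D])
    b = np.concatenate([np.array(vals), np.zeros(n_hi - 2)])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)         # least-squares solve
    return x

if __name__ == "__main__":
    # Ground truth observed twice through 2x box downsampling, with a
    # one-high-res-pixel relative shift ("shift slightly to the right").
    x_true = np.sin(np.linspace(0, 3 * np.pi, 32))
    shifts = [0, 1]
    obs = [np.array([x_true[s + 2 * j : s + 2 * j + 2].mean()
                     for j in range((32 - s) // 2)]) for s in shifts]
    x_hat = sr_solve(obs, shifts, n_hi=32)
    print("max abs reconstruction error:", np.abs(x_hat - x_true).max())
```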

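Finally, returning to the noise-reduction slide earlier in these notes: the claim that averaging helps because the variance of the mean falls as sigma^2 / n is easy to check numerically. The toy sketch below (illustrative only; nothing here comes from the lecture) averages n noisy exposures of the same synthetic scene and compares the measured residual noise to the sigma / sqrt(n) prediction.

```python
import numpy as np

rng = np.random.default_rng(1)
scene = rng.uniform(size=(64, 64))   # stand-in for the true, noise-free image
sigma = 0.2                          # per-pixel noise std of a single shot

for n in (1, 4, 16, 64):
    shots = scene + rng.normal(0.0, sigma, size=(n, 64, 64))  # n noisy exposures
    avg = shots.mean(axis=0)                                  # averaged image
    measured = (avg - scene).std()
    print(f"n={n:3d}  measured noise={measured:.4f}  "
          f"predicted={sigma / np.sqrt(n):.4f}")
```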
