UW-Madison CS 779 - Image Based Rendering

Slide outline:
Last Time · Today · Image Based Rendering So Far … · Improving Sampling · Surface Light Fields · Surface Light Field Set-Up · Surface Light Field System · Surface Light Field Results (2 slides) · Surface Light Fields Analysis · Summary · Special Applications · Hybrid Image/Geometry for Architecture · Façade (Debevec, Taylor, Malik 1996) · Façade Summary · Photogrammetric Modeling · View Dependent Textures · View-Dependent Textures · Model-Based Stereo (2 slides) · Extensions · Other Systems · Baillard & Zisserman 99 · Rendering Faces · Overview · Why Image-Based? · Video Rewrite (Bregler, Covell, Slaney 1997) · Video-Rewrite Example · Facial Expressions From Photographs (Pighin, Hecker, Lischinski, Szeliski, Salesin 1998) · Facial Expressions From Photographs · A Morphable Model (Blanz, Vetter 1999) · A Morphable Model · A Morphable Model (cont) · Acquiring the Reflectance Field (Debevec, Hawkins, Tchou, Duiker, Sarokin, Sagar 2000) · Acquiring the Reflectance Field · Faces Summary · Next Time

03/11/05 © 2005 University of Wisconsin

Last Time
• Image Based Rendering
  – Plenoptic function
  – Light Fields and Lumigraph
• NPR Papers: By today
• Projects: By next week

Today
• Image Based Rendering
  – Surface Light Fields

Image Based Rendering So Far …
• In terms of samples from the plenoptic function, what has each method captured?
  – View morphing?
  – Plenoptic modeling?
  – Light Field / Lumigraph?

Improving Sampling
• We really care about rays that emanate from surfaces
• We don't care about rays in "free space"
• For accurate texture, we want control of the resolution on the surface

Surface Light Fields
• Instead of storing the complete light field, store only rays emanating from the surface of interest
  – Parameterize the surface mesh (standard technique)
  – Choose sample points on the surface
  – Sample the space of rays leaving the surface from those points
  – When rendering, look up nearby sample points and appropriate sample rays
• Best for rendering complex BRDF models
  – An example of view-dependent texturing

Surface Light Field Set-Up
(figure slide)

Surface Light Field System
• Capture with range scanners and cameras
  – Geometry and images
• Build Lumispheres and compress them
  – Several compression options, discussed in some detail
• Rendering methods
  – A real-time version exists

Surface Light Field Results
(figure slide)

Surface Light Field Results
• Photos vs. renderings (figure slide)

Surface Light Fields Analysis
• Why doesn't this solve the photorealistic rendering problem?
• How could it be extended?
  – Precomputed Radiance Transfer – SIGGRAPH 2002 and many papers since

Summary
• Light fields capture very dense representations of the plenoptic function
  – Fields can be stitched together to give walkthroughs
  – The data requirements are large
  – Sampling is still not dense enough – filtering introduces blurring
• Now: using domain-specific knowledge

Special Applications
• If we assume a specific application, many image-based rendering tools can be improved
  – The Light Field and Lumigraph assumed the special domain of orbiting small objects
• Two applications stand out:
  – Architecture, because people wish to capture models of cities
  – Faces, because there is no other good way to do it, and pictures of faces are essential to various fields (movies, advertising, and so on)

Hybrid Image/Geometry for Architecture
• Most buildings:
  – Are made up of common, simple architectural elements (boxes, domes, …)
  – Have many implicit constraints, such as parallel lines and right angles
• We can exploit the simple geometry and constraints to simplify the image-based rendering problem
• Hybrid approaches build simple geometric models from data in images, then texture them with the same images

Façade (Debevec, Taylor, Malik 1996)
• Start with a sparse set of images of a building (from one to tens of images)
• With an interactive photogrammetric modeling program, a user builds an approximate 3D model of the building
• Generate view-dependent texture maps from the images
• Use model-based stereo to reconstruct additional detail (doorways, window ledges, …)
• Render from any view (assuming the images see all the surfaces)

Façade Summary
(figure slide)

Photogrammetric Modeling
• User specifies which parametric blocks make up a model, and the constraints between them
• User marks edges on the model and corresponding edges in the images
• The system determines camera locations and model parameters using a minimization algorithm
• Result: a 3D model of the approximate geometry
  – The blocks used determine the accuracy of the model
  – Details can be left out – later stages will catch them

View Dependent Textures
• The images can be projected onto the 3D model to determine which parts of the images correspond to which parts of the model
  – Hardware projective texture mapping can make this very fast
• More than one image may see a point on the model
  – Blend the image values
  – Use weights that favor images from cameras closer to the viewing direction (alpha blending in hardware)
• Some points may be seen in no image – use hole filling (or creative photography)

View-Dependent Textures
• (b) and (d) use textures from images that match the view angle
• (c) uses the incorrect photo

Model-Based Stereo
• Blocks do not capture all the details, such as sunken doorways
  – This introduces errors in the new renderings
  – View-dependent texture mapping helps if there is a view close to the right direction
• The approximate model gives many major hints to an automatic shape-from-stereo algorithm
  – Find correspondences between points in different images
  – Add depth information to the texture

Model-Based Stereo
(figure slide)

Extensions
• Surfaces of revolution as blocks
  – Allows for domes and towers
• Recovering lighting and reflectance parameters
  – Requires a good geometric model

Other Systems
• Other [preview truncated]
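The rendering lookup on the Surface Light Fields slide (find nearby surface sample points, then the appropriate sampled ray) can be sketched as below. The data layout, a list of surface sample points, a shared set of sampled outgoing directions, and per-point, per-direction RGB radiance, is an invented illustration, not the representation used by the actual system; interpolation is omitted for clarity.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def query(points, dirs, radiance, x, omega):
    """Radiance leaving the surface near x in unit direction omega.

    points:   list of (x, y, z) surface sample positions
    dirs:     list of unit outgoing directions sampled at every point
    radiance: radiance[p][d] = (r, g, b) for point p, direction d
    Nearest-neighbor lookup over points, then over directions
    (largest cosine = closest sampled ray)."""
    p = min(range(len(points)), key=lambda i: dist(points[i], x))
    d = max(range(len(dirs)), key=lambda i: dot(dirs[i], omega))
    return radiance[p][d]
```

A real system would blend several nearby points and directions rather than snapping to the single nearest sample.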
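The weighting idea on the View Dependent Textures slide ("favor images from cameras closer to the viewing direction") might look like the following. The cosine falloff, the back-facing cutoff, and the uniform fallback for holes are assumptions for illustration, not the exact formula from the Façade paper.

```python
def view_blend_weights(view_dir, camera_dirs):
    """Normalized blending weights for view-dependent texturing.

    view_dir:    unit direction of the novel view, as (x, y, z)
    camera_dirs: list of unit viewing directions of the source cameras
    Cameras aligned with the novel view get high weight; back-facing
    cameras (negative cosine) get zero."""
    cos = [max(0.0, sum(v * c for v, c in zip(view_dir, cam)))
           for cam in camera_dirs]
    total = sum(cos)
    if total == 0.0:
        # No camera sees this direction (a "hole"): fall back to uniform.
        return [1.0 / len(camera_dirs)] * len(camera_dirs)
    return [c / total for c in cos]
```

In hardware, these per-image weights correspond to the alpha values used when compositing the projected textures.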
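The Photogrammetric Modeling slide says the system recovers model parameters by minimization against user-marked edges. A toy version with a single unknown (a block's height), an idealized pinhole camera, and a brute-force grid search as the minimizer; all of the numbers and the camera model here are invented for illustration, not taken from Façade.

```python
def project(point, f=1.0):
    """Pinhole projection of a 3D point onto the image plane at z = f."""
    x, y, z = point
    return (f * x / z, f * y / z)

def fit_height(observed_uv, corner_x, corner_z):
    """Recover the block height h whose top corner (corner_x, h, corner_z)
    best reprojects onto the user-marked image point observed_uv,
    by grid search over candidate heights 0.005 .. 5.0."""
    best_h, best_err = None, float("inf")
    for k in range(1, 1001):
        h = 0.005 * k
        u, v = project((corner_x, h, corner_z))
        err = (u - observed_uv[0]) ** 2 + (v - observed_uv[1]) ** 2
        if err < best_err:
            best_h, best_err = h, err
    return best_h
```

The real system solves for many block parameters and camera poses jointly with a nonlinear least-squares minimizer; the grid search here just makes the "minimize reprojection error" idea concrete.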

