Princeton COS 598B - Conservative Volumetric Visibility with Occluder Fusion

Conservative Volumetric Visibility with Occluder Fusion

Gernot Schaufler, Julie Dorsey (Laboratory for Computer Science, Massachusetts Institute of Technology)
Xavier Decoret, François X. Sillion (iMAGIS, GRAVIR/IMAG — INRIA)

Abstract

Visibility determination is a key requirement in a wide range of graphics algorithms. This paper introduces a new approach to the computation of volume visibility, the detection of occluded portions of space as seen from a given region. The method is conservative and classifies regions as occluded only when they are guaranteed to be invisible. It operates on a discrete representation of space and uses the opaque interior of objects as occluders. This choice of occluders facilitates their extension into adjacent opaque regions of space, in essence maximizing their size and impact. Our method efficiently detects and represents the regions of space hidden by such occluders. It is the first one to use the property that occluders can also be extended into empty space provided this space is itself occluded from the viewing volume. This proves extremely effective for computing the occlusion by a set of occluders, effectively realizing occluder fusion. An auxiliary data structure represents occlusion in the scene and can then be queried to answer volume visibility questions. We demonstrate the applicability to visibility preprocessing for real-time walkthroughs and to shadow-ray acceleration for extended light sources in ray tracing, with significant acceleration in both cases.
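The abstract says the auxiliary data structure "can then be queried to answer volume visibility questions", for example during visibility preprocessing for walkthroughs. The structure itself is developed later in the paper and is not part of this preview, so the sketch below treats it as a black-box predicate over grid cells and shows only the consumer side: building a conservative PVS for one viewcell. Every name here (OcclusionQuery, is_occluded, cell_size, potentially_visible_set, the bounding-box layout) is an assumption made for this sketch, not the paper's API.

    from dataclasses import dataclass
    from math import floor
    from typing import Callable, Iterable, List, Tuple

    # An axis-aligned bounding box: (xmin, ymin, zmin, xmax, ymax, zmax).
    Box = Tuple[float, float, float, float, float, float]

    @dataclass
    class OcclusionQuery:
        # Consumer-side view of a precomputed occlusion structure for one viewcell.
        # is_occluded(i, j, k) must return True only for grid cells that the
        # preprocess has guaranteed to be invisible from that viewcell.
        cell_size: float
        is_occluded: Callable[[int, int, int], bool]

        def cells_overlapping(self, box: Box):
            # Enumerate all grid cells touched by the box (uniform grid assumed).
            xmin, ymin, zmin, xmax, ymax, zmax = box
            lo = (floor(xmin / self.cell_size), floor(ymin / self.cell_size), floor(zmin / self.cell_size))
            hi = (floor(xmax / self.cell_size), floor(ymax / self.cell_size), floor(zmax / self.cell_size))
            for i in range(lo[0], hi[0] + 1):
                for j in range(lo[1], hi[1] + 1):
                    for k in range(lo[2], hi[2] + 1):
                        yield (i, j, k)

        def box_hidden(self, box: Box) -> bool:
            # A bounding box is provably hidden only if every cell it touches is.
            return all(self.is_occluded(i, j, k) for (i, j, k) in self.cells_overlapping(box))

    def potentially_visible_set(objects: Iterable[Tuple[str, Box]],
                                query: OcclusionQuery) -> List[str]:
        # Conservative PVS for the viewcell: cull an object only when its whole
        # bounding box lies in proven-occluded space; keep everything else.
        return [name for name, box in objects if not query.box_hidden(box)]

    # Example: with a structure that reports nothing occluded, the PVS is everything.
    q = OcclusionQuery(cell_size=1.0, is_occluded=lambda i, j, k: False)
    print(potentially_visible_set([("chair", (2, 0, 2, 3, 1, 3))], q))  # ['chair']

Because an object is culled only when every cell its bounding box touches is proven occluded, a predicate that fails to prove occlusion can only enlarge the reported PVS; it can never cull a visible object, which mirrors the conservative guarantee the paper emphasizes.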
1 Introduction

Determining visibility is central in many computer graphics algorithms. If visibility information were available in advance, scan-line renderers would not need to rasterize hidden geometry, and ray tracers could avoid tracing shadow rays from points in shadow and testing objects that could not be hit. However, computing and storing all possible view configurations for a scene — the aspect graph [20] — is impractical for complex scenes. Even calculating all the visual events in a scene has very high complexity [9] and poses numerical stability problems.

It is generally easier to conservatively overestimate the set of potentially visible objects (PVS [1, 26]) for a certain region of space (referred to as a "viewcell" throughout this paper). While effective methods exist to detect occlusions in indoor scenes [1, 26] and terrain models [24], in more general types of complex scenes previous approaches [4, 6, 21] consider single convex occluders only to determine objects, or portions of space, that are completely hidden from the viewcell. This is known as volume visibility.

In many cases, objects are hidden due to the combination of many, not necessarily convex, occluders. This situation is exacerbated by the lack of large polygons in today's finely tessellated models. Figure 7 in Section 4.3 compares the number of occlusions detected using single convex occluders to the number detected with our method. Combining the effect of multiple, arbitrary occluders is complicated by the many different kinds of visual events that occur between a set of objects [9] and by various geometric degeneracies.

As a new solution to these problems, this paper proposes to calculate volume visibility on a conservative discretization of space. Occlusion is explicitly represented in this discretization and can be queried to retrieve visibility information for arbitrary scene objects — either static, dynamic or newly added.

We use opaque regions of space as blockers and automatically derive them from the scene description instead of expecting large convex occluders to be present in the scene. Our representation decouples the scene complexity from the accuracy and computational complexity at which visibility is resolved.

We show that hidden regions of space are valid blockers and that any opaque blocker can be extended into such regions of space. This effectively combines — fuses [32] — one blocker with all the other blockers that have caused this region to be occluded and results in a dramatic improvement in the occlusions detected. Collections of occluders need not be connected or convex.

The rest of the paper is organized as follows. In the next section, we review previous approaches to visibility computation with special emphasis on volume visibility methods. Next, we describe our approach in 2D and then extend it to 3D and 2 1/2 D. We present results for PVS computation and for reducing the number of shadow rays in ray tracing. We conclude with a discussion of our results and suggestions for future work.
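The two preceding paragraphs carry the core idea: space is discretized, opaque regions act as blockers, and regions already proven hidden may themselves serve as blockers, which fuses the contributions of separate occluders. The paper's actual construction appears in sections beyond this preview; the sketch below is only a deliberately restricted 2D illustration of that fusion property under its own assumptions (a uniform grid, an axis-aligned viewcell, only whole grid columns or rows as blockers, and boundary-grazing sightlines ignored). None of the names or the update rule come from the paper.

    # Cell states on a uniform 2D grid, indexed grid[x][y] (hypothetical layout).
    #   OPAQUE - derived from the opaque interior of scene objects; always a blocker
    #   EMPTY  - free space not (yet) proven hidden; treated as potentially visible
    #   HIDDEN - empty space proven invisible from the viewcell; also a valid blocker
    OPAQUE, EMPTY, HIDDEN = "opaque", "empty", "hidden"

    def column_blocks(grid, x, y_lo, y_hi):
        # A full column of opaque-or-hidden cells over rows y_lo..y_hi blocks every
        # sightline that crosses it within that row range.
        return all(grid[x][y] in (OPAQUE, HIDDEN) for y in range(y_lo, y_hi + 1))

    def row_blocks(grid, y, x_lo, x_hi):
        return all(grid[x][y] in (OPAQUE, HIDDEN) for x in range(x_lo, x_hi + 1))

    def cell_hidden(grid, viewcell, cx, cy):
        # Conservative test for cell (cx, cy) against the axis-aligned viewcell
        # (vx0..vx1, vy0..vy1): if some whole grid column (or row) lying strictly
        # between them is a blocker over their combined extent, every sightline
        # from the viewcell to the cell must cross that blocker.
        vx0, vx1, vy0, vy1 = viewcell
        y_lo, y_hi = min(vy0, cy), max(vy1, cy)
        x_lo, x_hi = min(vx0, cx), max(vx1, cx)
        cols = range(vx1 + 1, cx) if cx > vx1 else range(cx + 1, vx0)
        if any(column_blocks(grid, x, y_lo, y_hi) for x in cols):
            return True
        rows = range(vy1 + 1, cy) if cy > vy1 else range(cy + 1, vy0)
        return any(row_blocks(grid, y, x_lo, x_hi) for y in rows)

    def classify(grid, viewcell):
        # Iterate to a fixed point: cells proven hidden in one pass serve as
        # blockers in the next pass, fusing the effect of separate occluders.
        width, height = len(grid), len(grid[0])
        changed = True
        while changed:
            changed = False
            for x in range(width):
                for y in range(height):
                    if grid[x][y] == EMPTY and cell_hidden(grid, viewcell, x, y):
                        grid[x][y] = HIDDEN
                        changed = True
        return grid

    # Tiny demo: an 8x6 grid with a full-height opaque wall in column 3 hides
    # every empty cell to its right from a viewcell in the lower-left corner.
    grid = [[EMPTY] * 6 for _ in range(8)]
    for y in range(6):
        grid[3][y] = OPAQUE
    classify(grid, viewcell=(0, 1, 0, 1))
    assert all(grid[x][y] == HIDDEN for x in range(4, 8) for y in range(6))

The fixed-point loop is where fusion shows up in this toy version: a cell proven hidden behind one occluder immediately strengthens the blocker set used when testing cells farther away, so disconnected, non-convex occluders can jointly hide space that none of them hides alone.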


2 Previous Work

The central role of visibility has resulted in many previously published approaches. We classify them into the following three categories: exact, point-sampled, and conservative visibility computations, and we focus the discussion on volume visibility approaches. Examples of exact visibility representations are the aspect graph [20], the visibility skeleton [9], and exact shadow boundaries [3, 8, 23, 27]. As mentioned above, they are impractical for complex scenes.

Point-sampling algorithms calculate visibility up to the accuracy of the display resolution [5, 7, 13]. One sample ray is sent into the scene and the obtained visible surface is reused over an area (e.g., a pixel or a solid angle on the hemisphere). Today's most widely used approach is a hardware-accelerated z-buffer [2] or its variants, the hierarchical z-buffer [14] and hierarchical occlusion maps [32]. Visibility results obtained from these algorithms cannot be extended to volume visibility without introducing error. For volume visibility, projections are not feasible, as no single center of projection is appropriate.

To cope with the complexity of today's models, researchers have investigated conservative subsets of the hidden scene portion. Airey et al. [1] and Teller et al. [26, 28] propose visibility preprocessing for indoor scenes. They identify objects that are visible through sequences of portals. Yagel et al. [31] apply similar ideas in 2D for visibility in caves.
