UMBC CMSC 635 - Real-Time Fur with Precomputed Radiance Transfer


Real-Time Fur with Precomputed Radiance Transfer

John W. Kloetzli, Jr.* (UMBC)
* e-mail: [email protected]

Figure 1: Left: Dense, unlit fur (color set by depth); 55,000 hairs, 32 shells. Right: Final render of fur in real-time with lighting.

Abstract

This paper introduces Precomputed Radiance Transfer (PRT) to shell textures in the context of real-time fur rendering. PRT is a method which allows static objects to exhibit global illumination effects such as self-shadowing and soft shadows while being rendered in real-time. This is done by precomputing, in a special basis, how the surface transfers incident radiance, so that correct illumination can be reconstructed under arbitrary lighting environments. Shell texturing is a technique for rendering complex surface geometry in real-time through the use of concentric 3D rings, or shells, around a model. The shells are transparent everywhere except at the intersection of the shell and the microgeometry being rendered. We apply these two techniques together to the problem of rendering fur, producing real-time fur with global illumination.

Keywords: real-time, fur, precomputed radiance transfer, PRT, microgeometry, microsurfaces

1 Introduction

Rendering microsurfaces is a difficult task in computer graphics. Because microsurfaces are by definition very high-frequency geometry, traditional rasterization or ray-tracing techniques either bog down to the point of uselessness or are plagued with severe aliasing artifacts. Surface shading techniques are also not suited to the task, because microsurfaces, although very small, are still geometrically visible to the naked eye, which lighting equations alone are unable to capture.

Fur is a perfect example of a microsurface, containing hundreds or thousands of hairs that are very small but certainly individually visible. Still, we need to render fur if we intend to have realistic animals in computer graphics. In addition, we need a very fast way to render it if we intend those animals to appear in an interactive application such as a game.

Some of the first techniques used to render fur were based on 3D textures. Kajiya and Kay [1989] described a ray-tracing method which stored microsurface data in a volume and modeled light scattering and attenuation along each ray through the volume. This method produced excellent results, but was far from real-time. In addition, close inspection of the fur revealed what they described as the "painter's illusion": blotches of color that appear to be microgeometry from a distance but up close separate into meaningless blobs. Hypertexture [Perlin and Hoffert 1989] is another volume method, but instead of capturing surfaces it records densities between 0 and 1. This gives the effect of "soft" objects; by modulating these densities with different functions, the authors could produce a variety of effects, including fur. Although Hypertexture can model many different things effectively, it does not produce fur as realistic as other methods. There are implementations of Hypertexture that run in real-time on today's graphics hardware, but none of them attempt to compute the inter-object shadows that are needed for realistic microsurfaces.
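Both of these volume approaches boil down to marching a ray through a density field and attenuating light along the way. The following C++ fragment is a minimal sketch of that idea; the fur-like density function, the extinction coefficient, and all of the names are illustrative assumptions, not code from either of the cited papers.

```cpp
// Minimal sketch of marching a ray through a scalar density volume,
// accumulating transmittance the way volume fur / hypertexture renderers do.
// The density function below is a made-up stand-in for real fur data.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Hypothetical "soft object" density in [0,1]: dense near a base surface
// at z = 0, thinning with height, modulated by a cheap periodic
// function to suggest individual strands.
float density(const Vec3& p) {
    float falloff = std::exp(-4.0f * p.z);                       // fur gets sparser with height
    float strands = 0.5f + 0.5f * std::sin(40.0f * p.x) * std::sin(40.0f * p.y);
    float d = falloff * strands;
    return d < 0.0f ? 0.0f : (d > 1.0f ? 1.0f : d);
}

// March from 'origin' along 'dir', returning the fraction of background
// light that survives (transmittance). Lower values mean thicker fur.
float transmittance(Vec3 origin, Vec3 dir, float maxDist) {
    const int   steps = 64;
    const float dt    = maxDist / steps;
    const float sigma = 8.0f;                                    // assumed extinction coefficient
    float T = 1.0f;
    for (int i = 0; i < steps; ++i) {
        Vec3 p { origin.x + dir.x * (i + 0.5f) * dt,
                 origin.y + dir.y * (i + 0.5f) * dt,
                 origin.z + dir.z * (i + 0.5f) * dt };
        T *= std::exp(-sigma * density(p) * dt);                 // Beer-Lambert attenuation
    }
    return T;
}

int main() {
    Vec3 eye  {0.1f, 0.2f, 0.5f};
    Vec3 down {0.0f, 0.0f, -1.0f};                               // looking straight into the fur layer
    std::printf("transmittance = %.3f\n", transmittance(eye, down, 0.5f));
    return 0;
}
```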
Another method for fur rendering was proposed by Goldman [1997], who was working on rendering fur for the Disney film 101 Dalmatians. His basic assumption was that the camera would never get close enough to a furry object for the geometric properties of the fur to become distinct from its lighting properties. Since no individual hair is ever explicitly visible in this setting, no geometry is modeled at all. His method describes light reflection from coats of fur using a purely stochastic lighting model that is fairly complex, although much less so than geometric approaches, and thus is not suitable for applications that need to display geometric fur.

Real-time fur methods have also been explored previously. Van Gelder and Wilhelms [1997] simply draw lines using standard graphics workstations. Although they did not achieve very realistic results, they did show that it was possible to render fur at interactive rates.

The second real-time technique of interest here was introduced by Lengyel et al. [2001]. The method creates concentric shells around the model being rendered, each shell displaying a different slice of a volume texture. The shells are transparent except at the precomputed intersection of the shell and the fur volume. Rendered together, the shells create a very convincing furry surface. While this works well when viewed from above, since the overlapping shells create the illusion of continuous geometry, it breaks down near the silhouette of the object, where the gaps between adjacent shells become apparent. To remedy this, the authors add small "fin" textures rendered normal to the surface across all the edges of the model, and fade them in as an edge approaches the silhouette. These fin textures fill in the gaps left by the shell textures at the silhouette, creating a complete geometric rendering of fur. The technique can be viewed as a way of rendering the Kajiya and Kay [1989] fur model in real-time, but it uses a simple ambient/diffuse/specular lighting scheme that does not capture self-shadowing. Real-time lighting of microsurfaces is a difficult problem, but one that must be solved to even approach realistic results. As discussed by Lokovic and Veach [2000], realistic rendering of microgeometry such as fur depends heavily on inter-object shadows, namely hair-to-hair shadows.
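As a rough illustration of the shell-and-fin idea, the sketch below shows how a renderer might place each shell layer and how strongly it might fade in a fin near the silhouette. The layer count, offsets, and fade curve are assumptions chosen for illustration; Lengyel et al. implement this on graphics hardware with volume and fin textures rather than CPU code.

```cpp
// Sketch of the geometric side of shell-and-fin fur rendering:
// each shell is the base mesh pushed outward along its normals, and
// fins are faded in where the surface turns edge-on to the viewer.
// Constants and helper names here are illustrative, not from the paper.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3  add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Position of a vertex on shell 'layer' of 'numShells', for fur of length furLength.
Vec3 shellPosition(Vec3 basePos, Vec3 normal, int layer, int numShells, float furLength) {
    float offset = furLength * float(layer) / float(numShells - 1);
    return add(basePos, scale(normal, offset));
}

// Fin opacity: zero when the surface faces the viewer (shells alone suffice),
// approaching one at the silhouette, where gaps between shells show.
float finAlpha(Vec3 normal, Vec3 viewDir) {
    float facing = std::fabs(dot(normal, viewDir));   // both assumed normalized
    float alpha  = 1.0f - facing;
    return alpha * alpha;                             // assumed fade curve
}

int main() {
    Vec3 base {0, 0, 0}, n {0, 0, 1};
    for (int i = 0; i < 4; ++i) {
        Vec3 p = shellPosition(base, n, i, 32, 0.1f);
        std::printf("shell %d offset z = %.4f\n", i, p.z);
    }
    Vec3 frontView {0, 0, 1}, grazingView {1, 0, 0};
    std::printf("fin alpha front = %.2f, silhouette = %.2f\n",
                finAlpha(n, frontView), finAlpha(n, grazingView));
    return 0;
}
```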
One method that can be used to compute inter-object shadows for complicated geometry is Deep Shadow Maps [Lokovic and Veach 2000]. Traditional shadow mapping renders the scene from the position of each light, storing the depth of the nearest surface at each pixel into a depth map. The scene is then re-rendered from the camera's point of view, and each fragment is projected into the light's view. If the distance between the fragment and the light is greater than the corresponding depth-map value, the fragment is not the first surface hit by the light and is therefore in shadow. If the depths match, the fragment is the first one seen by the light and should be lit. However, traditional shadow maps are prone to aliasing for high-frequency geometry (unless prohibitively high resolutions are used) and hence are not suitable for microgeometry. Deep shadow maps, on the other hand, store a function at each texel. This function defines the fractional visibility at all depths along the corresponding light ray, rather than a single depth, which makes smooth, antialiased partial shadows possible for fine geometry such as hair.
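To make the contrast concrete, the sketch below shows both lookups: the binary depth comparison of a traditional shadow map, and a deep-shadow-map style query that interpolates a stored visibility function. The data layout, bias value, and sample values are assumptions for illustration, not the representation used by Lokovic and Veach.

```cpp
// Sketch contrasting a traditional shadow-map test (binary lit / shadowed)
// with a deep-shadow-map style lookup (fractional visibility as a function
// of depth). Data layout and constants are illustrative assumptions.
#include <cstddef>
#include <cstdio>
#include <vector>

// Traditional shadow map: one stored depth per texel. A fragment is lit only
// if it is (approximately) the closest surface the light sees.
bool litByTraditionalShadowMap(float storedDepth, float fragmentDepth) {
    const float bias = 0.001f;                 // assumed bias to avoid self-shadow acne
    return fragmentDepth <= storedDepth + bias;
}

// Deep shadow map texel: a piecewise-linear visibility function, i.e. a list
// of (depth, visibility) samples with visibility falling from 1 toward 0.
struct VisibilitySample { float depth, visibility; };
using DeepTexel = std::vector<VisibilitySample>;

// Evaluate the fractional visibility at a given depth by interpolating
// between the surrounding stored samples.
float fractionalVisibility(const DeepTexel& fn, float depth) {
    if (fn.empty() || depth <= fn.front().depth) return 1.0f;
    for (std::size_t i = 1; i < fn.size(); ++i) {
        if (depth <= fn[i].depth) {
            float t = (depth - fn[i - 1].depth) / (fn[i].depth - fn[i - 1].depth);
            return fn[i - 1].visibility + t * (fn[i].visibility - fn[i - 1].visibility);
        }
    }
    return fn.back().visibility;
}

int main() {
    // A texel whose light ray passes through a clump of hair between depths 1.0 and 1.4.
    DeepTexel texel = { {1.0f, 1.0f}, {1.2f, 0.55f}, {1.4f, 0.2f} };

    std::printf("binary test at depth 1.2: %s\n",
                litByTraditionalShadowMap(1.0f, 1.2f) ? "lit" : "shadowed");
    std::printf("deep shadow visibility at depth 1.2: %.2f\n",
                fractionalVisibility(texel, 1.2f));
    return 0;
}
```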