Ray Tracing
CSE167: Computer Graphics
Instructor: Steve Rotenberg
UCSD, Fall 2005

Ray Tracing
- Ray tracing is a powerful rendering technique that is the foundation of many modern photoreal rendering algorithms
- The original ray tracing technique was proposed in 1980 by Turner Whitted, although there were suggestions about the possibility in scientific papers dating back to 1968
- Classic ray tracing shoots virtual view rays into the scene from the camera and traces their paths as they bounce around
- With ray tracing, one can achieve a wide variety of complex lighting effects, such as accurate shadows and reflections/refractions from curved surfaces
- Achieving these effects with the same precision is difficult, if not impossible, with a more traditional rendering pipeline
- Ray tracing offers a big advance in visual quality, but it comes at the price of notoriously slow rendering times

Ray Intersections
- Tracing a single ray requires determining whether that ray intersects any one of potentially millions of primitives
- This is the basic problem of ray intersection
- Many algorithms exist to make this not only feasible, but remarkably efficient
- Tracing one ray is a complex problem and requires serious work to make it run at an acceptable speed
- The bigger problem, of course, is that one needs to trace lots of rays to generate a high quality image

Rays
- Recall that a ray is a geometric entity with an origin and a direction
- A ray in a 3D scene would probably use a 3D vector for the origin and a normalized 3D vector for the direction

    class Ray {
    public:
        Vector3 Origin;
        Vector3 Direction;
    };

Camera Rays
- We start by 'shooting' rays from the camera out into the scene
- We can render the pixels in any order we choose (even in random order!), but we will keep it simple and go from top to bottom, and left to right
- We loop over all of the pixels and generate an initial primary ray (also called a camera ray or eye ray)
- The ray origin is simply the camera's position in world space
- The direction is computed by first finding the 4 corners of a virtual image in world space, then interpolating to the correct spot, and finally computing a normalized direction from the camera position to the virtual pixel

(Diagram: camera position, virtual image, primary ray)

Ray Intersection
- The initial camera ray is then tested for intersection with the 3D scene, which contains a bunch of triangles and/or other primitives
- If the ray doesn't hit anything, we can color the pixel with some specified 'background' color
- Otherwise, we want to know the first thing the ray hits (the ray may hit several surfaces, but we only care about the one closest to the camera)
- For the intersection, we need to know the position, normal, color, texture coordinate, material, and any other relevant information we can get about that exact location
- If we hit somewhere in the center of a triangle, for example, this information is computed by interpolating the vertex data

Ray Intersection
- We will assume that the results of a ray intersection test are put into a data structure that conveniently packages them together

    class Intersection {
    public:
        Vector3 Position;
        Vector3 Normal;
        Vector2 TexCoord;
        Material *Mtl;
        float Distance; // Distance from ray origin to intersection
    };

Lighting
- Once we have the key intersection information (position, normal, color, texture coordinates, etc.), we can apply any lighting model we want
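The pipeline described so far (camera rays, then intersection, then lighting) starts with primary ray generation. A minimal sketch in C++: the Vector3 helpers and MakePrimaryRay are illustrative names, not part of any particular renderer, and the corners array is assumed to hold the four corners of the virtual image in world space.

```cpp
#include <cassert>
#include <cmath>

// Minimal 3D vector with just the operations this sketch needs.
struct Vector3 { float x, y, z; };

static Vector3 add(const Vector3 &a, const Vector3 &b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vector3 sub(const Vector3 &a, const Vector3 &b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vector3 scale(const Vector3 &v, float s)        { return {v.x * s, v.y * s, v.z * s}; }
static Vector3 normalize(const Vector3 &v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return scale(v, 1.0f / len);
}

// Ray class from the slides.
class Ray {
public:
    Vector3 Origin;
    Vector3 Direction;
};

// Build the primary ray for pixel (px, py): bilinearly interpolate between
// the four corners of the virtual image (top-left, top-right, bottom-left,
// bottom-right, all in world space), then normalize the direction from the
// camera position to that virtual pixel.
Ray MakePrimaryRay(const Vector3 &camPos, const Vector3 corners[4],
                   int px, int py, int width, int height) {
    float u = (px + 0.5f) / width;   // horizontal fraction across the image
    float v = (py + 0.5f) / height;  // vertical fraction down the image
    Vector3 top    = add(scale(corners[0], 1.0f - u), scale(corners[1], u));
    Vector3 bottom = add(scale(corners[2], 1.0f - u), scale(corners[3], u));
    Vector3 pixel  = add(scale(top, 1.0f - v), scale(bottom, v));
    return { camPos, normalize(sub(pixel, camPos)) };
}
```

The render loop then walks the pixels top to bottom, left to right, calling a function like this for each one before intersecting and shading.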
- This can include procedural shaders, lighting computations, texture lookups, texture combining, bump mapping, and more
- Many of the most interesting forms of lighting involve spawning additional rays and tracing them recursively
- The result of the lighting equation is a color, which is used to color the pixel

Shadow Rays
- Shadows are an important lighting effect that can easily be computed with ray tracing
- If we wish to compute the illumination with shadows for a point, we shoot an additional ray from the point to every light source
- A light is only allowed to contribute to the final color if the ray doesn't hit anything between the point and the light source
- The lighting equation we looked at earlier in the quarter can easily be adapted to handle this, as c_lgt,i will be 0 if the light is blocked:

    c = m_amb * c_amb + Σ_i [ c_lgt,i * ( m_dif (n · l_i) + m_spec (n · h_i)^s ) ]

- Obviously, we don't need to shoot a shadow ray to a light source if the dot product of the normal with the light direction is negative
- Also, we can put a limit on the range of a point light, so lights don't have an infinite influence (bending the laws of physics)

Shadow Rays
- Shadow rays behave slightly differently from primary (and secondary) rays
- Normal rays (primary and secondary) need to know the first surface hit and then compute the color reflected off of that surface
- Shadow rays, however, simply need to know whether something is hit or not
- In other words, we don't need to compute any additional shading for the ray, and we don't need to find the closest surface hit
- This makes them a little faster than normal rays

Offsetting Spawned Rays
- We say that the shadow rays are spawned off of the surface, or we might say that the primary ray spawned additional shadow rays
- When we spawn new rays from a surface, it is usually a good idea to push the origin of the ray out slightly (0.00001) along the normal of the surface
- This fixes problems due to mathematical roundoff that might otherwise cause the ray to spawn from a point slightly below the surface, making the spawned ray appear to hit that same surface
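The shadow test above only needs a yes/no answer, which is exactly what makes shadow rays cheaper than normal rays. A minimal sketch against a list of sphere blockers (Vector3, Sphere, and InShadow are illustrative names, and the scene is reduced to spheres only to keep the example self-contained):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vector3 { float x, y, z; };

static Vector3 add(const Vector3 &a, const Vector3 &b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vector3 sub(const Vector3 &a, const Vector3 &b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vector3 scale(const Vector3 &v, float s)        { return {v.x * s, v.y * s, v.z * s}; }
static float   dot(const Vector3 &a, const Vector3 &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Sphere { Vector3 Center; float Radius; };

// Returns true if anything blocks the path from 'point' to 'lightPos'.
// Note the two shadow-ray shortcuts from the slides: we offset the origin
// along the normal to avoid re-hitting the surface, and we return on the
// FIRST hit found, since shadow rays never need the closest intersection.
bool InShadow(const Vector3 &point, const Vector3 &normal,
              const Vector3 &lightPos, const std::vector<Sphere> &blockers) {
    const float eps = 0.00001f;                      // offset from the slides
    Vector3 origin = add(point, scale(normal, eps)); // push off the surface
    Vector3 toLight = sub(lightPos, origin);
    float maxDist = std::sqrt(dot(toLight, toLight));
    Vector3 d = scale(toLight, 1.0f / maxDist);      // unit direction to light
    for (const Sphere &s : blockers) {
        // Ray-sphere test: solve t^2 + 2bt + c = 0 for unit direction d.
        Vector3 oc = sub(origin, s.Center);
        float b = dot(oc, d);
        float c = dot(oc, oc) - s.Radius * s.Radius;
        float disc = b * b - c;
        if (disc < 0.0f) continue;                   // ray misses this sphere
        float t = -b - std::sqrt(disc);              // nearest root
        if (t > 0.0f && t < maxDist) return true;    // blocked before the light
    }
    return false;                                    // light is visible
}
```

A light whose InShadow test returns true simply contributes c_lgt,i = 0 to the lighting sum.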
Reflection Rays
- Another powerful feature often associated with ray tracing is accurate reflections off of complex surfaces
- If we want to render a surface as a perfect mirror, instead of computing the lighting through the normal equation, we just create a new reflection ray and trace it into the scene
- Remember that primary rays are the initial rays shot from the camera; any reflected rays (and others, like refracted rays) are called secondary rays
- Reflected rays, like shadow rays, should be moved slightly along the surface normal to prevent the ray from re-intersecting the same surface

Computing the Reflection Direction
- Given an incoming ray direction d and a unit surface normal n, the reflected direction r is:

    r = d - 2 (d · n) n

(Diagram: incoming direction d, normal n, reflected direction r)

Reflections
- If the reflection ray hits a normal material, we just compute the illumination and use that for the final color
- If the reflection ray hits another mirror, we just recursively generate a new reflection ray and
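The reflection formula r = d - 2(d · n)n translates directly to code. A minimal sketch (Vector3 and Reflect are illustrative names; n is assumed to be unit length, and a real tracer would also cap the recursion depth so that two facing mirrors cannot recurse forever):

```cpp
#include <cassert>
#include <cmath>

struct Vector3 { float x, y, z; };

static Vector3 sub(const Vector3 &a, const Vector3 &b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vector3 scale(const Vector3 &v, float s)        { return {v.x * s, v.y * s, v.z * s}; }
static float   dot(const Vector3 &a, const Vector3 &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Mirror-reflect the incoming direction d about the unit normal n:
//   r = d - 2 (d . n) n
// d points toward the surface, so (d . n) is negative; the normal component
// of d is flipped while the tangential component is preserved.
Vector3 Reflect(const Vector3 &d, const Vector3 &n) {
    return sub(d, scale(n, 2.0f * dot(d, n)));
}
```

For example, a 45-degree ray coming in along (1, -1, 0)/sqrt(2) against the normal (0, 1, 0) bounces out along (1, 1, 0)/sqrt(2), as expected for a mirror.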