Lighting, part 2
CSE167: Computer Graphics
Instructor: Steve Rotenberg
UCSD, Fall 2006

Triangle Rendering

- The main stages in the traditional graphics pipeline are:
  - Transform
  - Lighting
  - Clipping / Culling
  - Scan Conversion
  - Pixel Rendering

Lighting

- Lighting is an area of the graphics pipeline that has seen no limits to its complexity and continues to promote active research
- Many advanced lighting techniques are very complex and require massive computational and memory resources
- New algorithms continue to be developed to optimize various pieces within these different areas
- The complexity of lighting within the context of photoreal rendering completely dwarfs the other areas of the graphics pipeline, to the point where it can almost be said that rendering is 99% lighting
- The requirements of photoreal lighting have caused radical modifications to the rendering process, to the point that modern high-quality rendering bears very little resemblance to the processes we've studied so far
- For one thing, so far we've talked about rendering triangle by triangle, whereas photoreal rendering is generally done pixel by pixel. We will look at these techniques in more detail in a later lecture

Basic Lighting & Texturing

- So far, we have mainly focused on computing lighting at each vertex and then interpolating it across the triangle (Gouraud shading), then combining it with a texture-mapped color to get the final color for a pixel
- We also introduced the Blinn reflection model, which treats a material's reflectivity as a sum of diffuse and specular components
- We saw that z-buffering is a simple, powerful technique for hidden surface removal that allows us to render triangles in any order, and even handle situations where triangles intersect each other
- We also saw that texture mapping (combined with mipmapping to fix the shimmering problems) is a nice way to add significant detail without adding tons of triangles
- We haven't covered transparency or fog yet, but we will in just a moment
- This classic approach of 'Blinn-lit, Gouraud-shaded, z-buffered, mipmapped triangles with transparency and fog' essentially forms the baseline of what one needs to achieve any sort of decent quality in a 3D rendering
- The biggest thing missing is shadows, but with a few tricks one can achieve this as well as a wide variety of other effects

Blinn-Gouraud-zbuffer-mipmap-fog-transparency

- This was the state of the art in software rendering back around 1978, requiring only a couple of hours to generate a decent image (on a supercomputer)
- This was the state of the art in realtime graphics hardware in 1988, allowing one to render perhaps 5,000 triangles per frame, at 720x480 resolution, at 60 frames per second (assuming one could afford to spend $1,000,000+ on the hardware)
- By the late 1990s, consumer hardware was available that could match that performance for under $200
- The Sony PS2 essentially implements this pipeline, and can crank out maybe 50,000 triangles per frame at 60 Hz
- The Xbox was the first video game machine to progress beyond this basic approach, and high-end PC graphics boards were starting to do it a couple of years before the Xbox (around 2000)
- Modern graphics boards support general-purpose programmable transformation/lighting operations per vertex, as well as programmable per-pixel operations including Phong shading, per-pixel lighting, and more. They still operate on one triangle at a time, however, and so still fall within the classification of traditional pipeline renderers

Per Vertex vs. Per Pixel Lighting

- We can compute lighting per vertex and interpolate the color (Gouraud shading)
- Or we can interpolate the normals and compute lighting per pixel (Phong shading)
- The two approaches compute lighting at different locations, but can still use exactly the same techniques for computing the actual lighting
- In either case, we are still just computing the lighting at some position with some normal

Classic Lighting Model

- The classic approach to lighting is to start by defining a set of lights in the scene
  - There are a variety of simple light types, but the most basic ones are directional and point lights
  - Each light in the scene needs to have its type specified, as well as any other relevant properties (color, position, direction...). Geometric properties are usually specified in world space, although they may end up getting transformed to camera space, depending on the implementation
- And a set of materials
  - Materials define properties such as diffuse color, specular color, and shininess
- And then a bunch of triangles
  - Each triangle has a material assigned to it
  - Triangles can also specify a normal for each vertex
- Then we proceed with our rendering:
  - When we render a triangle, we first apply the lighting model to each vertex
  - For each vertex, we loop through all of the lights and compute how that light interacts with the position, normal, and unlit color of the vertex, ultimately computing the total color of the light reflected in the direction of the viewer (camera)
  - This final per-vertex color is interpolated across the triangle during scan conversion and then combined with a texture color at the pixel level

Lighting

[Figure: a surface point with normal n, lit by two point lights and a directional light; incident light directions l1, l2, l3, per-light colors c1, c2, c3, eye vector e toward the camera at v, and the question of what final color c results]

Incident Light

- To compute a particular light's contribution to the total vertex/pixel color, we start by computing the color of the incident light
- The incident light color c_lgt represents the actual light reaching the surface in question
- For a point light, for example, the incident light will have the color of the source, but will be attenuated based on the inverse square law (or some variation of it)
- We also need to know the incident light direction. This is represented by the unit-length vector l (that's supposed to be a lower-case L)
- Computing the incident light color & direction is pretty straightforward, but will vary from light type to light type

Reflection Models

- Different materials reflect light in different patterns
- The way a particular material reflects light is referred to as a reflection model (or sometimes as a lighting model, shading model, local illumination model, scattering model, or BRDF)
- The reflection model takes the direction and color of the incident light coming from a particular light source and computes what color of light is going to be
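The incident-light step for a point light can be sketched as follows. This is a minimal illustration, not code from the slides; the function name and the plain-list vector representation are assumptions, and the attenuation here is the pure inverse square law with no variation:

```python
import math

def point_light_incident(light_color, light_pos, surf_pos):
    """Incident color c_lgt and unit direction l for a point light.

    light_color, light_pos, surf_pos are [x, y, z] / [r, g, b] lists
    (illustrative representation, not from the slides).
    Returns (c_lgt, l): the source color attenuated by the inverse
    square law, and the unit vector from the surface toward the light.
    """
    to_light = [lp - sp for lp, sp in zip(light_pos, surf_pos)]
    dist = math.sqrt(sum(d * d for d in to_light))
    l = [d / dist for d in to_light]       # unit incident direction
    atten = 1.0 / (dist * dist)            # inverse square falloff
    c_lgt = [c * atten for c in light_color]
    return c_lgt, l
```

For a directional light the same interface would apply, but l is constant and c_lgt is simply the source color with no attenuation.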
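The per-vertex loop from the classic lighting model (summing each light's contribution via a reflection model, here the Blinn model with its half vector h = normalize(l + e)) can be sketched like this. The vector helpers, the material dictionary, and the (c_lgt, l) pair format are illustrative assumptions:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blinn_vertex_color(pos, normal, eye_pos, lights, mat):
    """Sum each light's Blinn contribution at one vertex.

    lights: list of (c_lgt, l) pairs, i.e. each light's incident color
            and unit incident direction at this vertex.
    mat: dict with 'diffuse', 'specular' (RGB lists) and 'shininess'
         (illustrative material representation).
    """
    e = normalize([ep - p for ep, p in zip(eye_pos, pos)])   # toward viewer
    color = [0.0, 0.0, 0.0]
    for c_lgt, l in lights:
        n_dot_l = max(0.0, dot(normal, l))                   # diffuse term
        h = normalize([li + ei for li, ei in zip(l, e)])     # half vector
        spec = max(0.0, dot(normal, h)) ** mat['shininess']  # specular term
        for i in range(3):
            color[i] += c_lgt[i] * (mat['diffuse'][i] * n_dot_l +
                                    mat['specular'][i] * spec)
    return color
```

In the classic pipeline this function would be evaluated once per vertex and the resulting colors interpolated across the triangle (Gouraud shading); evaluating it per pixel with an interpolated normal instead gives Phong shading.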