The GPGPU Creation Story
Mikola Lysenko
University of Wisconsin-Madison
Dept. of Mechanical Engineering
Spatial Automation Lab

Overview
- Covers the major historical developments in roughly chronological order
- Time period: 1970 – present
- Focus on: history, key technologies, market forces
- Really a story about PC games

The Early Graphics Displays
- Vector graphics (1960s)
- Based on the oscilloscope: an electron gun plus a phosphor display
- Examples: Sketchpad, Asteroids, Pong
[DutchTronix (2007)]

Problems with Vector Displays
- No color (though you can put a filter over part of the screen)
- Limited number of objects
- Crude images: everything will always look like lines
[Atari, Inc. (1979)]

Raster Displays
- Represent an image with a “framebuffer”: a 2D array of pixels
- Requires a RAMDAC (Random Access Memory Digital-to-Analog Converter) to translate the framebuffer into a video signal
- Can be used to display any image (in theory)

Early Raster Games
- Sprite-based rendering (late 1980s – early 1990s)
- 'Blit' rectangular subregions quickly to animate objects
- Later games added scrolling backgrounds
- Examples: Duke Nukem, Commander Keen
[Apogee (1991); id Software (1990)]

Ray Casting Games
- Enabled by raster displays and faster CPUs
- Render a column of pixels per ray
- Examples: Wolfenstein 3D (1992) & Doom (1993)
[id Software (1992); id Software (1993)]

Problems with Ray Casting
- The 3D is an illusion
- Enemies and objects are sprites
- The world is limited to 2D geometry (though Doom had height)
[Apogee (1994)]

Polygons
- Pioneered by SGI (1980s)
- Basic idea: represent a 3D object by a collection of polygons on its boundary
- Made famous in games by Quake (1996)
- The basis for all modern 3D graphics
[id Software (1996)]

Polygon Rasterization
1. Trace the edges of the polygon into two buffers, LEFT and RIGHT
[Figure: LEFT and RIGHT edge buffers]

Polygon Rasterization (cont.)
2. Traverse the LEFT-to-RIGHT scan lines in the framebuffer

Polygon Rasterization (final)
3. Fill in the scan lines
That's it!

Problems
- We need to draw a lot of polygons to get a good approximation of a 3D scene
- Drawing polygons is expensive
- Quake got around this with very aggressive visibility culling
- CPU performance was not scaling fast enough

First 3D Hardware
- Originally developed for Hollywood and industrial/academic applications
- First machines built by SGI (the IRIS line, late 1980s)
- Brought to the mass market by 3Dfx (1996)
[id Software (1998)]

How the First GPUs Worked
- Used a PCI slot and hijacked the video signal
- Could not do 2D and 3D simultaneously: no windowed-mode 3D
- Drew only one triangle at a time
[3Dfx Interactive (1996)]

Further Evolution
- Originally, PCI was fast enough; the GPU could barely keep up!
- But triangle rendering got faster, and more bandwidth was needed to exploit that speed
- So a new AGP slot was made... then AGP 2X, 4X, 8X... then PCI-E, PCI-E x16...
- Asymptotically, this is a losing battle

Hardware Caching
- On-board RAM was getting cheaper
- Take some of the texture memory and use it to store vertices!
- The CPU can then draw many polygons with a single command, saving a lot of bandwidth
- BUT... this only works for static objects. What about animated characters?

Vertex Programs
- Allow programmers to parameterize animations
- The GPU runs a small user-defined program for each vertex
- Can generate an arbitrary animated shape from a few parameters and some cached memory

How Vertex Programs Work
- Vertices and topology are stored on the GPU
- Each vertex is a collection of user-defined values called 'attributes', e.g. position, color, texture coordinates
- May also pass 'uniform' values that are constant for the whole draw call, e.g.
transform matrices, animation coordinates, etc.
- Attributes + Topology + Uniforms + Vertex Program => Triangles!

Example: Skinning (Skeletal Animation)
- Parameterize a character's animations in terms of a collection of 'bones'
- The vertices form a 'skin'
- Attributes: relative position, bone weights, color/texture
[Martinez, “Ontology for Virtual Humans” (2007)]

Skinning (cont.)
- To draw a character, call the vertex program with the bone positions passed as uniforms
- The vertex program interpolates the vertices
- Bandwidth savings: thousands of vertices, each with >30 bytes of data, vs. a handful of bones (16 bytes each)

Skinning Results
- More characters, more polygons
- Faster rendering
- Flexible animation: can do ragdolls
- A win-win situation
[Croteam (2001)]

Rendering Quality
- Over time, people looked for ways to add more detail to scenes, beyond simple texturing:
  - Alternate lighting models
  - Bump mapping
  - Shadows
  - Environment mapping
- These techniques produced higher-quality images, but required special GPU features

Proliferation of Features
- To keep up, hardware vendors kept adding more gizmos, with new updates almost every month
- A programmer's nightmare: impossible to take advantage of all the new features
- Bad for coders, bad for gamers, bad for video card manufacturers

Fragment Shaders
- Make per-pixel color calculations programmable
- Reduce interface complexity: implementing features is no longer the vendor's problem
- Opened up brand-new rendering techniques (far too many to even begin covering in this talk)
- Dramatic improvements in image quality

Where does the term “Shader” come from?
- Shaders were invented by Pixar for their RenderMan platform (shading language published in 1988)
- Meant to help artists specify lighting and textures
- Used in Toy Story
[Pixar (1995)]

Fragment Shader Examples
[Decaudin, “Cartoon-Looking Rendering of 3D Scenes” (1996)]
[Gosselin, “Phat Lewt: Drawing a Diamond” (2004)]
[Heidrich & Seidel, “Realistic, Hardware-Accelerated Shading and Lighting” (1999)]
[Tatarchuk, “Practical Dynamic Parallax Occlusion Mapping” (2005)]
[D'Eon, “NVIDIA Human Head Demo” (2007)]
[Lengyel et al.,
“Real-Time Fur over Arbitrary Surfaces” (2001)]

GPU Shaders Today
- Can create beautiful, lifelike imagery
- Requires massive computing power: fast memory, many processors
[Bethesda Softworks (2007)]

Exponential Growth of GPU Power
- To keep up with demands from gaming, GPUs grew very powerful
[Chart source: linux-mag.com]

The Birth of GPGPU
- GPUs had become very powerful:
  - Parallel execution for processing vertices and fragments
  - Massive memory
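The two-buffer scanline fill from the Polygon Rasterization slides can be sketched in software. This is an illustrative Python version under my own assumptions (function name and span representation are hypothetical; real GPUs rasterize triangles with incremental fixed-point edge walking, not floating-point interpolation):

```python
def rasterize_convex(vertices, height):
    """Scanline-fill a convex polygon given as a list of (x, y) points.

    Step 1: trace every edge into a LEFT (minimum x) and RIGHT
    (maximum x) buffer, one entry per scan line.
    Steps 2-3: traverse the scan lines and fill the LEFT..RIGHT spans.
    Returns the filled spans as (y, x_start, x_end) tuples.
    """
    INF = float("inf")
    left = [INF] * height    # leftmost x hit on each scan line
    right = [-INF] * height  # rightmost x hit on each scan line

    n = len(vertices)
    for i in range(n):
        (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
        if y1 < y0:                      # walk each edge top-to-bottom
            x0, y0, x1, y1 = x1, y1, x0, y0
        if y0 == y1:                     # horizontal edge: endpoints cover it
            if 0 <= y0 < height:
                left[y0] = min(left[y0], x0, x1)
                right[y0] = max(right[y0], x0, x1)
            continue
        for y in range(max(y0, 0), min(y1, height - 1) + 1):
            t = (y - y0) / (y1 - y0)     # interpolate x along the edge
            x = x0 + t * (x1 - x0)
            left[y] = min(left[y], x)
            right[y] = max(right[y], x)

    spans = []
    for y in range(height):
        if left[y] <= right[y]:          # scan line touched by the polygon
            spans.append((y, int(left[y]), int(round(right[y]))))
    return spans
```

The convexity assumption keeps a single LEFT/RIGHT pair per scan line sufficient; concave polygons need a list of spans per line or prior decomposition into triangles.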
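The bandwidth argument in the Skinning slides can be made concrete with a small sketch: upload the skin once, then send only a handful of bone transforms per frame. A minimal 2D linear-blend-skinning function, written in Python rather than a real vertex program; the names and the (cos, sin, tx, ty) bone encoding are my own illustrative choices, not from the talk:

```python
def skin_vertex(rest_pos, weights, bone_transforms):
    """Linear blend skinning for a single vertex (2D for brevity).

    rest_pos: (x, y) position in the rest pose (a cached 'attribute').
    weights: list of (bone_index, weight) pairs; weights sum to 1.
    bone_transforms: per-bone (cos, sin, tx, ty) rotation + translation,
        playing the role of the per-frame 'uniforms' that replace
        re-sending thousands of vertices.
    Each bone transforms the vertex; the results are blended by weight.
    """
    x = y = 0.0
    for bone, w in weights:
        c, s, tx, ty = bone_transforms[bone]
        bx = c * rest_pos[0] - s * rest_pos[1] + tx  # rotate, then translate
        by = s * rest_pos[0] + c * rest_pos[1] + ty
        x += w * bx                                   # blend by bone weight
        y += w * by
    return (x, y)
```

Per frame the CPU uploads only the bone_transforms array; the per-vertex work runs on the GPU, which is exactly the bandwidth win the slides describe.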