Princeton COS 598B - A Video-Based Rendering Acceleration Algorithm
Contents

Abstract
Keywords
1 Introduction
  1.1 Main Contribution
2 Related Work
  2.1 Interactive Display of Large Datasets
    2.1.1 Geometric Models
    2.1.2 Image-Based Representations
    2.1.4 Visibility Culling
  2.2 Combining Graphics with Video
  2.3 Video for Multimedia Applications
3 Overview
  3.1 Cell-Based Walkthrough
  3.2 Cells and Portals
  3.3 Virtual Cells
  3.4 Video-Based Impostors
    3.4.1 Creating Impostors
    3.4.2 Video Compression of Impostors
  3.5 Offline Encoding
    3.5.1 Mapping Cells to Streams
    3.5.2 Choice of Encoding Algorithm
    3.5.3 Encoding Parameters
4 Implementation
  4.1 Tools for MPEG Manipulation
    4.1.1 Offline Encoding
    4.1.2 Runtime Decoding
  4.3 Cell Structure
  4.4 Preprocessing
  4.5 System Pipeline
    4.5.1 View Management Task
    4.5.2 Prefetching Task
  4.6 Memory Management
5 Performance and Results
  5.1 Overall Rendering Acceleration
  5.2 Breakdown of Time Per Frame
  5.3 Preprocessing
  5.4 Analysis of Results
6 Conclusions and Future Work
Acknowledgements
8 References

A Video-Based Rendering Acceleration Algorithm for Interactive Walkthroughs

Andrew Wilson, Ming C. Lin, Dinesh Manocha
Department of Computer Science
CB 3175, Sitterson Hall
University of North Carolina at Chapel Hill
Chapel Hill, NC 27599
{awilson,lin,dm}@cs.unc.edu

Boon-Lock Yeo, Minerva Yeung
Intel Corporation
Microcomputer Research Labs
2200 Mission College Blvd.
Santa Clara, CA 95052
[email protected]

http://www.cs.unc.edu/~geom/Video

Abstract

We present a new approach for faster rendering of large synthetic environments using video-based representations. We decompose the large environment into cells and pre-compute video-based impostors, using MPEG compression, to represent sets of objects that are far from each cell. At runtime, we decode the MPEG streams and use rendering algorithms that provide nearly constant-time random access to any frame. The resulting system has been implemented and used for an interactive walkthrough of a model of a house with 260,000 polygons and realistic lighting and textures.
It renders this model at 16 frames per second on average (an eightfold improvement over simpler algorithms) on a Pentium II PC with an off-the-shelf graphics card.

Keywords

Massive models, architectural walkthrough, MPEG video compression, virtual cells, video-based impostors

1 Introduction

One of the fundamental problems in computer graphics and virtual environments is the interactive display of complex environments on current graphics systems. Large environments composed of tens of millions of primitives are frequently used in computer-aided design, scientific visualization, 3D audio-visual and other sensory exploration of remote places, tele-presence applications, visualization of medical datasets, etc. The primitives in such environments include geometric primitives such as polygonal models and spline surfaces, samples of real-world objects acquired with cameras or scanners, volumetric datasets, etc. Rendering these complex environments at interactive rates, i.e. 30 frames per second, is a major challenge on current graphics systems. Furthermore, the sizes of these datasets appear to be increasing faster than the performance of graphics systems.

One of the driving applications for interactive display of large datasets is the interactive walkthrough. The main goal is to create an interactive computer graphics system that enables a viewer to experience a virtual environment by simulating a walkthrough of the model. Possible applications of such a system include design evaluation of architectural models [Brooks86, Funkhouser93], simulation-based design of large CAD datasets [Aliaga99], virtual museums and places [Mannoni97], etc. A complete walkthrough system provides several kinds of feedback to the user, including visual, haptic, proprioceptive and auditory feedback, at interactive rates [Brooks86]. Real-time feedback as the user moves is perhaps the most important component of a satisfying walkthrough system.
This faithful response to user spontaneity is what distinguishes a synthetic environment from precomputed images or frames, which can take minutes or even hours per frame to compute, and from pre-recorded video. In this paper, we focus on the problem of generating visual updates at interactive rates for complex environments.

Figure 1: CAD database of a house with realistic lighting and texture. The model has over 260,000 polygons and 19 megabytes of high-resolution texture maps. This model is too large to be naively rendered at interactive rates.

There is considerable research on rendering acceleration algorithms for displaying large datasets at interactive frame rates on current graphics systems. These algorithms can be classified into three major categories: visibility culling, multi-resolution modeling, and image-based representations. However, no single algorithm or approach can successfully display large datasets at interactive rates from all viewpoints. Some hybrid approaches that have been investigated use image-based representations to render "far" objects [Maciel95, Shade96, Aliaga96, Aliaga99] and geometric representations for "near" objects [Cohen97, Erikson99, Garland97, Hoppe96]. Commonly used image-based representations include texture maps, textured depth meshes, layered depth images [Aliaga99], etc. In terms of application to large models, however, these image-based representations have the following drawbacks:

- Sampling: Most of the algorithms take a finite number of samples of a large dataset. No good algorithms are known for automatically generating samples for a large environment.
- Reconstruction: Different techniques have been proposed to reconstruct an image from a new viewpoint. Some of them do not produce high-fidelity images, while others require special-purpose hardware for interactive updates.
- Representation and storage: A large set of samples takes considerable storage. No good algorithms are known for automatic management of the host and texture memory devoted to these samples.

1.1 Main Contribution

In this paper, we present a method for accelerating the rendering of large synthetic environments using video-based representations. Video-based techniques have been widely used for the capture, representation and display of real-world datasets. We propose the use of video-based impostors for representing synthetic environments and rendering these scenes at interactive rates on current high-end and low-end graphics systems. We use a
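The scheme outlined in the abstract — partitioning the environment into cells, pre-rendering the far geometry of each cell into a sequence of impostor frames, and selecting a frame at runtime in nearly constant time — can be sketched roughly as follows. This is a minimal illustration and not the authors' implementation: the class and function names (`CellImpostor`, `render_step`) are hypothetical, frames are represented by plain indices rather than MPEG-decoded images, and view direction is reduced to a single heading angle.

```python
# Hypothetical sketch of a cell-based impostor scheme: far geometry is
# pre-rendered into frames sampled around the viewing circle, and the
# runtime loop looks up the frame nearest the current heading.
import math


class CellImpostor:
    """Pre-rendered impostor frames for one cell, indexed by view heading."""

    def __init__(self, cell_id, num_frames):
        self.cell_id = cell_id
        self.num_frames = num_frames  # frames sampled over 360 degrees
        # In a real system each entry would be an encoded video frame;
        # here a frame is just its index.
        self.frames = list(range(num_frames))

    def frame_for_heading(self, heading_radians):
        """Constant-time lookup: map a heading to the nearest stored frame."""
        t = (heading_radians % (2 * math.pi)) / (2 * math.pi)
        return self.frames[int(round(t * self.num_frames)) % self.num_frames]


def render_step(cells, viewer_cell, heading, render_near, draw_impostor):
    """One frame of the walkthrough loop: full geometry for the cell the
    viewer occupies, impostors for every other cell."""
    render_near(viewer_cell)  # near geometry rendered normally
    for cell in cells:
        if cell.cell_id != viewer_cell:
            draw_impostor(cell.frame_for_heading(heading))
```

The key property this sketch tries to capture is that per-frame cost no longer depends on the polygon count of distant geometry, only on the near cell plus one impostor lookup per far cell.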

