Princeton COS 598B - A System for Implementing Intelligent Camera Control

CamDroid: A System for Implementing Intelligent Camera Control

Steven M. Drucker, MIT Media Lab
David Zeltzer, MIT Research Laboratory for Electronics
Massachusetts Institute of Technology, Cambridge, MA 02139, USA
[email protected], [email protected]

Abstract

In this paper, a method of encapsulating camera tasks into well-defined units called "camera modules" is described. Through this encapsulation, camera modules can be programmed and sequenced, and thus can be used as the underlying framework for controlling the virtual camera in widely disparate types of graphical environments. Two examples of the camera framework are shown: an agent which can film a conversation between two virtual actors, and a visual programming language for filming a virtual football game.

Keywords: Virtual Environments, Camera Control, Task Level Interfaces.

1. Introduction

Manipulating the viewpoint, or a synthetic camera, is fundamental to any interface which must deal with a three-dimensional graphical environment, and a number of articles have discussed various aspects of the camera control problem in detail [3, 4, 5, 19]. Much of this work, however, has focused on techniques for directly manipulating the camera.

In our view, this is the source of much of the difficulty. Direct control of the six degrees of freedom (DOFs) of the camera (or more, if field of view is included) is often problematic and forces the human VE participant to attend to the interface and its "control knobs" in addition to - or instead of - the goals and constraints of the task at hand. In order to achieve task-level interaction with a computer-mediated graphical environment, these low-level, direct controls must be abstracted into higher-level camera primitives, and in turn, combined into even higher-level interfaces. By clearly specifying what specific tasks need to be accomplished at a particular unit of time, a wide variety of interfaces can be easily constructed.
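The encapsulation the introduction argues for can be sketched in a few lines. This is a minimal illustration, not CamDroid's actual code: all class and function names here (`CameraState`, `CameraModule`, `TrackObject`, `run_sequence`) are hypothetical, and the interface is an assumption about what "programmed and sequenced" modules might look like.

```python
from dataclasses import dataclass


@dataclass
class CameraState:
    position: tuple       # (x, y, z) camera position
    target: tuple         # (x, y, z) point the camera looks at
    fov: float = 60.0     # field of view, in degrees


class CameraModule:
    """Encapsulates one well-defined camera task."""

    def start(self, state: CameraState) -> None:
        pass  # set up initial conditions for this task

    def update(self, state: CameraState, dt: float) -> CameraState:
        raise NotImplementedError


class TrackObject(CameraModule):
    """Example task: keep a (possibly moving) object centered in the frame."""

    def __init__(self, get_object_pos):
        self.get_object_pos = get_object_pos  # callback into the environment

    def update(self, state, dt):
        state.target = self.get_object_pos()
        return state


def run_sequence(modules, state, frames_per_module, dt=1 / 30):
    """Sequence modules over time: higher-level interfaces compose tasks
    by choosing which module controls the camera at each unit of time."""
    for m in modules:
        m.start(state)
        for _ in range(frames_per_module):
            state = m.update(state, dt)
    return state
```

The point of the sketch is the division of labor: a module owns one camera task, and the interface layer above it only decides which module is active when.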
This technique has already been successfully applied to interactions within a Virtual Museum [8].

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association of Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. 1995 Symposium on Interactive 3D Graphics, Monterey, CA, USA. © 1995 ACM 0-89791-736-7/95/0004...$3.50

2. Related Work

Ware and Osborne [19] described several different metaphors for exploring 3D environments, including the "scene in hand," "eyeball in hand," and "flying vehicle control" metaphors. All of these use a 6 DOF input device to control the camera position in the virtual environment. They discovered that flying vehicle control was more useful when dealing with enclosed spaces, and the "scene in hand" metaphor was useful in looking at a single object. Any of these metaphors can be easily implemented in our system.

Mackinlay et al. [16] describe techniques for scaling camera motion when moving through virtual spaces, so that, for example, users can always maintain precise control of the camera when approaching objects of interest. Again, it is possible to implement these techniques using our camera modules.

Brooks [3, 4] discusses several methods for using instrumented mechanical devices such as stationary bicycles and treadmills to enable human VE participants to move through virtual worlds using natural body motions and gestures. Work at Chapel Hill has, of course, focused for some time on the architectural "walkthrough," and one can argue that such direct manipulation devices make good sense for this application.
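Scaled camera motion of the kind Mackinlay et al. [16] describe can be sketched with a simple rule: each frame, move the camera a fixed fraction of its remaining distance to the point of interest, so motion slows automatically on approach and never overshoots. The helper names and the fraction value below are illustrative assumptions, not the published implementation.

```python
def approach(camera_pos, target_pos, fraction=0.1):
    """Advance camera_pos one frame toward target_pos, covering a fixed
    fraction of the remaining distance (a logarithmic approach)."""
    return tuple(c + fraction * (t - c) for c, t in zip(camera_pos, target_pos))


def fly_to(camera_pos, target_pos, frames):
    """Repeat the per-frame step; the remaining distance shrinks
    geometrically, by (1 - fraction) per frame."""
    for _ in range(frames):
        camera_pos = approach(camera_pos, target_pos)
    return camera_pos
```

Because step size is proportional to distance, the user gets coarse, fast motion far from the object and fine, precise control close to it, which is the behavior the technique is after.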
While the same may be said for the virtual museum, it is easy to think of circumstances - such as reviewing a list of paintings - in which it is not appropriate to require the human participant to physically walk or ride a bicycle. At times, one may wish to interact with topological or temporal abstractions, rather than the spatial. Nevertheless, our camera modules will accept data from arbitrary input devices as appropriate.

Blinn [2] suggested several modes of camera specification based on a description of what should be placed in the frame, rather than just describing where the camera should be and where it should be aimed.

Phillips et al. [18] suggest some methods for automatic viewing control. They primarily use the "camera in hand" metaphor for viewing human figures in the Jack™ system, and provide automatic features for maintaining smooth visual transitions and avoiding viewing obstructions. They do not deal with the problems of navigation, exploration, or presentation.

Karp and Feiner describe a system for generating automatic presentations, but they do not consider interactive control of the camera [12].

Gleicher and Witkin [10] describe a system for controlling the movement of a camera based on the screen-space projection of an object, but their system works primarily for manipulation tasks.

Our own prior work attempted to establish a procedural framework for controlling cameras [7]. Problems in constructing generalizable procedures led to the current constraint-based framework described here. Although this paper does not concentrate on methods for satisfying multiple constraints on the camera position, this is an important part of the overall camera framework we outline here. For a more complete reference, see [9]. An earlier form of the current system was applied to the domain of a Virtual Museum [8].

3. CamDroid System Design

This framework is a formal specification for many different types of camera control.
The central notion of this framework is that camera placement and movement is usually done for particular
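The simplest instance of Blinn's frame-based specification [2], discussed in Section 2, is deriving the camera's orientation from the requirement "this object sits at the center of the frame" rather than setting angles directly. The sketch below uses the standard look-at construction; it is offered as an illustration of the idea, not as CamDroid's constraint solver, and the vector helpers are written out so the example is self-contained.

```python
import math


def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))


def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])


def norm(a):
    n = math.sqrt(sum(x * x for x in a))
    return tuple(x / n for x in a)


def look_at(eye, subject, up=(0.0, 1.0, 0.0)):
    """Return an orthonormal (right, up, forward) camera basis that puts
    `subject` on the view axis - i.e. centered in the frame - instead of
    asking the user to aim the camera with explicit angles."""
    forward = norm(sub(subject, eye))       # view direction toward the subject
    right = norm(cross(forward, up))        # screen-right axis
    true_up = cross(right, forward)         # re-orthogonalized up axis
    return right, true_up, forward
```

Blinn's full proposal goes further (placing several objects at chosen frame positions simultaneously), which is what motivates solving for the camera rather than constructing it in closed form.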

