UCF COT 4810 - Interaction within Multimodal Environments in a Collaborative Setting


Interaction within Multimodal Environments in a Collaborative Setting

Glenn A. Martin, Jason Daly and Casey Thurston
Institute for Simulation & Training, University of Central Florida
3280 Progress Dr., Orlando, FL 32826
{martin, jdaly, cthursto}@ist.ucf.edu

Abstract

Much attention has been given lately to the topic of multimodal environments. Research in haptics is thriving, and even some olfactory research is appearing. However, while collaborative environments have been studied in the general sense, little attention has been paid to the interaction that occurs in multimodal environments within a collaborative setting. This paper presents research into a network architecture for collaborative, multimodal environments, with a focus on interaction. Within a dynamic world, three types of human interaction are possible: user-to-user, world-to-user and user-to-world. The first two form what are fundamentally multimodal environments: the user receives feedback through various modalities, both from other users and from the world. The last forms the area of dynamic environments, where the user can affect the world itself. The architecture presented here provides a flexible mechanism for studying multimodal environments in this distributed sense. Within such an architecture, latency and synchronization are key concerns when bringing multimodal environments into a collaborative setting.

1 Introduction

Virtual environments have existed for many years. However, despite numerous distributed simulations, including training applications in many domains, there is still relatively little interaction within these virtual environments. Most allow only navigation and perhaps some minimal interaction. For example, military simulations allow basic combat-oriented interaction (shooting, throwing a grenade, etc.), but few of them incorporate advanced modalities (such as haptics or olfaction) or dynamic changes to the environment itself.
2 Types of Interaction

Within a distributed, interactive virtual environment there are several components to address: the user, the environment and other users. In categorizing the types of interaction, we have split them into four categories: user-to-user, world-to-user, user-to-world and world-to-world. The first two types form what are fundamentally multimodal environments: whether from another user or from the world, the user receives feedback through the various modalities. In the user-to-user case this may be grabbing a shoulder or shooting an opponent; in the world-to-user case it may be bumping into a piece of furniture in the virtual environment. The latter two types of interaction form what we call dynamic environments. For example, the user can act upon the world by moving an object or changing an object's characteristics (e.g. watering a virtual plant, making the soil damper). Similarly, in a world-to-world interaction, a rain model might change the soil moisture. In this paper, however, we will focus on user-to-world interactions.

3 Multimodal Environments

In terms of virtual environments, a multimodal environment is one that the user experiences and interacts with using more than one sense or interaction technique. Strictly speaking, an environment that provides visual and auditory feedback is a multimodal environment; however, the more interesting environments are those that add haptic and/or olfactory feedback to the visual and auditory channels. Gustatory (taste) feedback is also possible, but there is almost no evidence of it in the literature, so we do not discuss it here. On the input side, systems that take input using more than one technique can also be considered multimodal. For example, a system that provides a joystick for locomotion as well as a speech recognition system for accepting commands to alter the environment can be called a multimodal environment.
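The four-category taxonomy above can be sketched as a small data model. This is an illustrative sketch only, not the paper's actual network architecture; the type and function names are invented for this example.

```python
from dataclasses import dataclass
from enum import Enum, auto


class InteractionType(Enum):
    """The four interaction categories from Section 2."""
    USER_TO_USER = auto()    # e.g. grabbing a shoulder, shooting an opponent
    WORLD_TO_USER = auto()   # e.g. bumping into virtual furniture
    USER_TO_WORLD = auto()   # e.g. moving an object, watering a plant
    WORLD_TO_WORLD = auto()  # e.g. a rain model dampening the soil


@dataclass
class InteractionEvent:
    """One interaction message; source/target are user or entity ids."""
    kind: InteractionType
    source: str
    target: str


def is_multimodal_feedback(event: InteractionEvent) -> bool:
    """User-to-user and world-to-user events drive multimodal feedback."""
    return event.kind in (InteractionType.USER_TO_USER,
                          InteractionType.WORLD_TO_USER)


def is_dynamic_environment(event: InteractionEvent) -> bool:
    """User-to-world and world-to-world events change the world itself."""
    return event.kind in (InteractionType.USER_TO_WORLD,
                          InteractionType.WORLD_TO_WORLD)
```

A distributed architecture could use such a classification to decide how each event is handled: feedback events are rendered locally on each client, while dynamic-environment events must be propagated so that every participant's copy of the world stays consistent.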
In this section, we focus on the senses and how the user experiences a multimodal environment, as well as a software architecture that supports the modeling and design of multimodal environments in a seamless way. Although it is difficult to develop a clear metric for whether adding more senses contributes to a greater sense of presence or immersion, studies have suggested that it does (Meehan, Insko, Whitton & Brooks, 2002). Also, Stanney et al. (2004) have shown that adding sensory channels allows the user to process more data than when the same amount of information is presented over a single sensory channel.

3.1 Visual

Apart from a few specialized applications, a virtual environment always provides visual feedback. The visual sense is essential for navigation and for detailed exploration of the environment. While other senses may introduce an event, entity or object to investigate, it is ultimately the visual sense that is used to analyze and deal with the object of interest. For this reason, an environment designed to be multimodal cannot ignore or diminish the importance of vision, and an appropriate amount of resources should be spent on creating high-fidelity visuals. Visual feedback is essential to all types of collaborative interaction. Signals, gestures and other visual cues enable user-to-user collaboration. The primary means for the user to experience the world is visual exploration. Finally, the user's ability to visually perceive the world is a prerequisite for user-to-world interaction.

3.2 Auditory

Most modern environments also provide some level of auditory feedback. It may be as simple as a beep or click when a certain event happens, or it may involve realistic, spatialized sounds coordinated with virtual objects and enhanced with environmental effects. The auditory sense is a very useful and natural channel for informing the user of events that occur outside his or her field of view.
Simple speech is probably the most natural user-to-user interaction in a collaborative environment. With high-fidelity acoustic rendering, the user may also be able to identify and keep track of one or more sound sources without relying on vision (for example, a bird in a tree, a splashing stream, or gunshots from a sniper positioned in a second-story window). Environmental effects such as reflection and reverberation can also give clues to the size and composition of the immediate environment or room. These are two examples of world-to-user interactions over the auditory channel.
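The paper does not specify how its spatialized sounds are attenuated. As a hedged sketch of one common approach, the inverse-distance rolloff model (used, in variants, by audio APIs such as OpenAL) reduces a source's gain as its distance from the listener grows; the function name and parameter defaults below are illustrative assumptions, not part of the described architecture.

```python
import math


def inverse_distance_gain(listener, source, ref_dist=1.0, rolloff=1.0):
    """Gain in [0, 1] for a point sound source, using the inverse-distance law:

        gain = ref_dist / (ref_dist + rolloff * (d - ref_dist))   for d > ref_dist

    where d is listener-to-source distance. Within ref_dist the source
    plays at full gain; larger rolloff values make it fade faster.
    """
    d = math.dist(listener, source)  # Euclidean distance (Python 3.8+)
    if d <= ref_dist:
        return 1.0
    return ref_dist / (ref_dist + rolloff * (d - ref_dist))
```

In a collaborative setting, each client would evaluate such a model against its own user's position, so the same gunshot event sounds near to one participant and distant to another.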

