Standardized Evaluation of Haptic Rendering Systems

Emanuele Ruffaldi (1), Dan Morris (2), Timothy Edmunds (3), Federico Barbagli (2), Dinesh K. Pai (3)

(1) PERCRO, Scuola Superiore S. Anna
(2) Computer Science Department, Stanford University
(3) Computer Science Department, Rutgers University

[email protected], {dmorris,barbagli}@robotics.stanford.edu, {tedmunds,dpai}@cs.rutgers.edu

ABSTRACT

The development and evaluation of haptic rendering algorithms presents two unique challenges. First, the haptic information channel is fundamentally bidirectional: the output of a haptic environment depends on user input, which is difficult to reproduce reliably. Second, it is difficult to compare haptic results to real-world, "gold standard" results, since such a comparison requires applying identical inputs to real and virtual objects and measuring the resulting forces, which demands hardware that is not widely available. We have addressed these challenges by building and releasing several sets of position and force data, collected by physically scanning a set of real-world objects, along with virtual models of those objects. We demonstrate novel applications of this data set for the development, debugging, optimization, evaluation, and comparison of haptic rendering algorithms.

CR Categories: H.5.2 [User Interfaces]: Haptic I/O

Keywords: haptics, ground truth, evaluation

1. INTRODUCTION AND RELATED WORK

Haptic rendering systems are increasingly oriented toward representing realistic interactions with the physical world. Particularly for simulation and training applications, which are intended to develop mechanical skills that will ultimately be applied in the real world, fidelity and realism are crucial. A parallel trend in haptics is the increasing availability of general-purpose haptic rendering libraries [1,2,3], providing core rendering algorithms that can be re-used across numerous applications. Given these two trends, developers and users would benefit significantly from standard verification and validation of haptic rendering algorithms.

In other fields, published results often "speak for themselves": the correctness of mathematical systems or the realism of images can be validated by reviewers and peers. Haptics presents a unique challenge in that the vast majority of results are fundamentally interactive, preventing consistent repeatability of results. Furthermore, it is difficult at present to distribute haptic systems with publications, although several projects have attempted to provide deployable haptic presentation systems [1,4].

Despite this need for algorithm validation, little work has been done on providing a general-purpose system for validating the physical fidelity of haptic rendering systems. Kirkpatrick and Douglas [5] present a taxonomy of haptic interactions and propose the evaluation of complete haptic systems based on these interaction modes, and Guerraz et al. [6] propose the use of physical data collected from a haptic device to evaluate a user's behavior and the suitability of a device for a particular task. Neither of these projects addresses realism or algorithm validation. Raymaekers et al. [7] describe an objective system for comparing haptic algorithms, but do not correlate their results to real-world data and thus do not address realism.
Hayward and Astley [8] present standard metrics for evaluating and comparing haptic devices, but address only the physical devices and do not discuss the software components of haptic rendering systems. Similarly, Colgate and Brown [9] present an impedance-based metric for evaluating haptic devices. Numerous projects (e.g. [10,11]) have evaluated the efficacy of specific haptic systems for particular motor training tasks, but do not provide general-purpose metrics and do not address the realism of specific algorithms. Along the same lines, Lawrence et al. [12] present a perception-based metric for evaluating the maximum stiffness that can be rendered by a haptic system.

This paper addresses the need for objective, deterministic haptic algorithm verification and comparison by presenting a publicly available data set that provides forces collected from physical scans of real objects, along with polygonal models of those objects, and several analyses that compare and/or assess haptic rendering systems. We present several applications of this data repository and these analysis techniques:

• Evaluation of rendering realism: comparing the forces generated from a physical data set with the forces generated by a haptic rendering algorithm allows an evaluation of the physical fidelity of the algorithm (a minimal sketch of this analysis appears at the end of this section).

• Comparison of haptic algorithms: running identical inputs through multiple rendering algorithms allows identification of the numeric strengths and weaknesses of each.

• Debugging of haptic algorithms: identifying specific geometric cases in which a haptic rendering technique diverges from the correct results allows the isolation of implementation bugs or scenarios not handled by a particular approach, independent of overall accuracy.

• Performance evaluation: comparing the computation time required to process a standard set of inputs allows objective comparison of the performance of specific implementations of haptic rendering algorithms.

The data and analyses presented here assume an impedance-based haptic rendering system and a single point of contact between the haptic probe and the object of interest. This work thus does not attempt to address the full range of possible contact types or probe shapes. Similarly, this work does not attempt to validate the realism of an entire haptic rendering pipeline, which would require consideration of device and user behavior and perceptual psychophysics. Rather, we present a data set and several analyses that apply to a large (but not universal) class of haptic rendering systems. We leave the extension of this approach to a wider variety of inputs and to more sophisticated metrics as future work.

The remainder of this paper is structured as follows: Section 2 will describe our system for physical data acquisition, Section 3 will describe the process by which we simulate a contact trajectory for evaluation of a haptic rendering algorithm, Section 4 will describe some example results we have obtained through this process, and Section 5 will discuss the limitations of our method.
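As an illustration of the realism, debugging, and performance analyses listed above, the following is a minimal sketch of how recorded probe positions might be replayed through an impedance-style, single-point rendering function and compared against the recorded ground-truth forces. The file name scan_trajectory.txt, its column layout, and the penalty-sphere renderer used as a stand-in for an algorithm under test are assumptions for illustration only; they do not reflect the actual format of the released data set or the rendering algorithms evaluated in the paper.

import time
import numpy as np

def load_scan(path):
    """Load a recorded scan trajectory.

    Hypothetical format: a whitespace-delimited text file with one sample
    per row -- probe position (x, y, z) followed by the measured contact
    force (fx, fy, fz). The actual repository format may differ.
    """
    data = np.loadtxt(path)
    return data[:, 0:3], data[:, 3:6]   # positions, ground-truth forces

def render_force_penalty_sphere(position, center, radius, stiffness):
    """Stand-in impedance-style renderer: single-point penalty contact
    with a sphere. A real evaluation would instead call the haptic
    rendering algorithm under test (e.g. a proxy-based method on the
    polygonal model of the scanned object)."""
    offset = position - center
    dist = np.linalg.norm(offset)
    penetration = radius - dist
    if penetration <= 0.0 or dist == 0.0:
        return np.zeros(3)                      # no contact
    normal = offset / dist
    return stiffness * penetration * normal     # F = k * d * n

def evaluate(positions, forces_measured, render_fn):
    """Replay a recorded trajectory through a rendering function and
    report realism (RMS force error), debugging (worst-case sample),
    and performance (time per force computation) metrics."""
    rendered = np.empty_like(forces_measured)
    start = time.perf_counter()
    for i, p in enumerate(positions):
        rendered[i] = render_fn(p)
    elapsed = time.perf_counter() - start

    err = np.linalg.norm(rendered - forces_measured, axis=1)
    return {
        "rms_force_error_N": float(np.sqrt(np.mean(err ** 2))),
        "worst_sample_index": int(np.argmax(err)),
        "worst_sample_error_N": float(np.max(err)),
        "mean_time_per_sample_s": elapsed / len(positions),
    }

if __name__ == "__main__":
    positions, forces = load_scan("scan_trajectory.txt")  # hypothetical file
    metrics = evaluate(
        positions, forces,
        lambda p: render_force_penalty_sphere(
            p, center=np.array([0.0, 0.0, 0.0]), radius=0.05, stiffness=800.0),
    )
    print(metrics)

Swapping the stand-in renderer for different rendering algorithms while keeping the recorded inputs fixed would give the kind of deterministic, repeatable comparison described in the list above, since every algorithm sees exactly the same input trajectory.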

