The tolerance for visual feedback distortions in a virtual environment

Yoky Matsuoka a,*, Sonya J. Allin b, Roberta L. Klatzky c

a Robotics Institute and Mechanical Engineering, Carnegie Mellon University, 5000 Forbes Avenue, NSH 3207, Pittsburgh, PA 15213, USA
b Human Computer Interface Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
c Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15213, USA

Received 9 July 2002; accepted 4 September 2002

Abstract

We are interested in using a virtual environment with a robotic device to extend the strength and mobility of people recovering from strokes by steering them beyond what they had thought they were capable of doing. Previously, we identified just noticeable differences (JNDs) for a finger's force production and positional displacement in a virtual environment. In this paper, we extend this investigation by identifying people's tolerance for distortions of visual representations of force production and positional displacement in a virtual environment. We determined that subjects are not capable of reliably detecting inaccuracies in the visual representation until there is 36% distortion. This discrepancy between actual and perceived movements is significantly larger than the JNDs reported in the past, indicating that a virtual robotic environment could be a valuable tool for steering actual movements further away from perceived movements. We believe this distorted condition may allow people recovering from strokes, even those who have perceptual or cognitive deficits, to rehabilitate with greater ease.

© 2002 Elsevier Science Inc. All rights reserved.

Keywords: Robotic rehabilitation; Virtual environment; Just noticeable difference; Feedback distortion

1. Introduction

The motor impairment of people recovering from strokes is often localized to one side of the body and to one area in particular, such as the arm or the hand.
Motor recovery occurs as the nervous system rewires its neural circuits to represent lost functions at a new neural location. Recently, novel rehabilitation techniques such as constraint-induced therapy, biofeedback therapy, and robot-assisted therapy have been employed [1–3]. Robotic techniques in particular enable precise recording of movements and variable force application to an affected limb, making them an effective strategy for motor rehabilitation. According to recent studies, robot-assisted stroke rehabilitation enhances arm movement recovery [2]. Moreover, robot-assisted rehabilitation improves patients' mobility and strength to a level equal to or greater than that achieved by human-assisted therapy [3–5]. However, none of the currently available systems addresses patients' cognitive or perceptual deficits, which may give patients a false perception of their own ability. This false perception has been implicated as a potential factor inhibiting motor recovery in the rehabilitation techniques that exist to date.

To overcome this issue, we plan to create a perceptual motor rehabilitation technique using a virtual environment with a robotic device. This technique will exploit a perceptual gap between the virtual and real environments, produced by distorting the virtual feedback by an imperceptible amount; the lower bound of this imperceptible distortion is determined by the just noticeable differences (JNDs) in force and position.

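To make the idea concrete, the sketch below scales an occluded finger's actual position by a multiplicative gain before it is drawn on screen, keeping the gain below a perceptual threshold. This is our own illustration, not the authors' implementation: the function and constant names are assumptions, and 0.18 corresponds to the 18% positional-displacement JND the paper reports.

```python
# Minimal sketch (illustrative, not the authors' code) of an
# imperceptible multiplicative distortion of visual feedback.
POSITION_JND = 0.18  # 18% positional-displacement JND from the paper


def displayed_position(actual_mm: float, distortion: float) -> float:
    """Scale the actual (occluded) finger position by (1 + distortion)
    before rendering, refusing gains at or above the perceptual threshold."""
    if abs(distortion) >= POSITION_JND:
        raise ValueError("distortion would be perceptible")
    return actual_mm * (1.0 + distortion)


# A 10 mm real movement rendered as roughly 11 mm on screen:
print(displayed_position(10.0, 0.10))
```

In this scheme a patient who moves 10 mm sees an 11 mm movement, so the real movement can be steered away from the perceived one without the subject noticing.
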
This technique will extend current robotic rehabilitation techniques by creating an environment where patients can improve their mobility and strength without conscious effort, thereby addressing the needs of patients who may have false perceptions about their abilities.

0031-9384/02/$ – see front matter © 2002 Elsevier Science Inc. All rights reserved.
PII: S0031-9384(02)00914-9
* Corresponding author. Tel.: +1-412-268-8127; fax: +1-412-268-6436. E-mail address: [email protected] (Y. Matsuoka).
Physiology & Behavior 77 (2002) 651–655

A JND is defined as the percentage increase in a stimulus that is required to reliably distinguish two stimuli. Much prior research has focused on JNDs for force in human subjects, but none, to our knowledge, has tailored its findings to the rehabilitation domain. JNDs for lifting 2- or 32-oz. weights by hand and arm were determined to be roughly 10% [6,7]. JNDs for a force-matching task about the elbow were determined to be between 5% and 9% [8], and between 5% and 10% for pinching motions between the finger and the thumb with a constant resisting force [9]. The pinching JND was found to be relatively constant over base forces ranging between 2.5 and 10 N.

In previous research, we conducted experiments with healthy subjects to derive JNDs for both force production and positional displacement, using the same virtual environment and the same robotic device as in the experiment described in this paper (preliminary results in Ref. [10]). Using this environment, we derived JNDs while subjects moved their index fingers against a resistive force produced by the robotic device. For the force JND, subjects were asked to sample pairs of forces by pressing their index fingers against the force produced by the robotic device while their fingers were occluded and visual feedback of the force was provided. The robotic device produced forces between 1.8 and 4.0 N. Our results revealed an average JND of 14.4%.

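Read as a Weber fraction, the JND definition above says two stimuli are reliably distinguishable only when their relative difference exceeds the JND. A minimal sketch of that test (the function name and example values are ours; 14.4% is the force JND reported above):

```python
def is_distinguishable(base: float, comparison: float, jnd: float) -> bool:
    """True if the relative difference between two stimuli exceeds
    the JND, expressed as a Weber fraction (e.g. 0.144 for 14.4%)."""
    return abs(comparison - base) / base > jnd


FORCE_JND = 0.144  # the 14.4% force JND reported above

# Example forces within the paper's 1.8-4.0 N range:
print(is_distinguishable(2.0, 2.2, FORCE_JND))  # 10% change -> False
print(is_distinguishable(2.0, 2.4, FORCE_JND))  # 20% change -> True
```

Any distortion whose Weber fraction stays below the JND should, by this definition, go unnoticed, which is the margin the proposed rehabilitation technique exploits.
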
We also conducted a similar experiment for a positional displacement JND, which was calculated to be 18.0%.

In this paper, we report the extent to which patients tolerate distortions in visual representations of force and position in a virtual environment. We hypothesized that force and position JNDs could be extended when subjects were given visual guidance indicating that their force and position were different from their actual force and position. Because proprioceptive sensing is duller than visual sensing, small deviations between an actual position (assuming it is occluded) and the one displayed on a computer screen are not perceived. If visual feedback distortions extend the force and position JNDs, then we may be able to extend the ability of stroke patients without their conscious effort.

2. Methods

2.1. Apparatus

We used a commercially available robotic device, the PHANToM Premium 1.5 (Sensable Technologies, Cambridge, MA), to provide force feedback in virtual environments. This machine has three actuated and three passive degrees of freedom.

