UHCL CSCI 5931 - Gestures without Libraries Toolkits or Training


Contents:
ABSTRACT
INTRODUCTION
RELATED WORK
THE $1 GESTURE RECOGNIZER
    Characterizing the Challenge
    A Simple Four-Step Algorithm
        Step 1: Resample the Point Path
        Step 2: Rotate Once Based on the “Indicative Angle”
        Step 3: Scale and Translate
        Step 4: Find the Optimal Angle for the Best Score
    An Analysis of Rotation Invariance
    Limitations of the $1 Recognizer
EVALUATION
    Method
        Subjects
        Apparatus
        Procedure: Capturing Gestures
        Procedure: Recognizer Testing
        Design and Analysis
    Results
        Recognition Performance
        Effect of Number of Templates / Training Examples
        Effect of Gesture Articulation Speed
        Scores Along the N-Best List
        Recognizer Execution Speed
        Differences Among Gestures and Subjective Ratings
    Discussion
    Recognizers, Recorders, and Gesture Data Set
FUTURE WORK
CONCLUSION
REFERENCES
APPENDIX A – $1 GESTURE RECOGNIZER

Gestures without Libraries, Toolkits or Training: A $1 Recognizer for User Interface Prototypes

Jacob O. Wobbrock
The Information School, University of Washington
Mary Gates Hall, Box 352840, Seattle, WA 98195-2840
[email protected]

Andrew D. Wilson
Microsoft Research
One Microsoft Way, Redmond, WA 98052
[email protected]

Yang Li
Computer Science & Engineering, University of Washington
The Allen Center, Box 352350, Seattle, WA 98195-2350
[email protected]

ABSTRACT
Although mobile, tablet, large display, and tabletop computers increasingly present opportunities for using pen, finger, and wand gestures in user interfaces, implementing gesture recognition largely has been the privilege of pattern matching experts, not user interface prototypers. Although some user interface libraries and toolkits offer gesture recognizers, such infrastructure is often unavailable in design-oriented environments like Flash, scripting environments like JavaScript, or brand new off-desktop prototyping environments. To enable novice programmers to incorporate gestures into their UI prototypes, we present a “$1 recognizer” that is easy, cheap, and usable almost anywhere in about 100 lines of code. In a study comparing our $1 recognizer, Dynamic Time Warping, and the Rubine classifier on user-supplied gestures, we found that $1 obtains over 97% accuracy with only 1 loaded template and 99% accuracy with 3+ loaded templates. These results were nearly identical to DTW and superior to Rubine. In addition, we found that medium-speed gestures, in which users balanced speed and accuracy, were recognized better than slow or fast gestures for all three recognizers. We also discuss the effect that the number of templates or training examples has on recognition, the score falloff along recognizers’ N-best lists, and results for individual gestures. We include detailed pseudocode of the $1 recognizer to aid development, inspection, extension, and testing.

ACM Categories & Subject Descriptors: H5.2 [Information interfaces and presentation]: User interfaces – Input devices and strategies. I5.2 [Pattern recognition]: Design methodology – Classifier design and evaluation. I5.5 [Pattern recognition]: Implementation – Interactive systems.

General Terms: Algorithms, Design, Experimentation, Human Factors.

Keywords: Gesture recognition, unistrokes, strokes, marks, symbols, recognition rates, statistical classifiers, Rubine, Dynamic Time Warping, user interfaces, rapid prototyping.

Figure 1. Unistroke gestures useful for making selections, executing commands, or entering symbols. This set of 16 was used in our study of $1, DTW [18,28], and Rubine [23].
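This preview cuts off before the paper's step-by-step description and the pseudocode in Appendix A, so the following is only a rough Python sketch of the four steps named in the outline above: resample the point path, rotate once based on the indicative angle, scale and translate, and search rotations for the best-scoring match against each stored template. The constants used here (64 resampled points, a 250-unit reference square, a roughly +/-45 degree golden-section rotation search), all function names, and the templates dictionary passed to recognize() are illustrative assumptions drawn from common $1-style implementations rather than from this preview; the paper's own Appendix A pseudocode is the authoritative reference.

import math

# Constants are assumptions from common $1-style implementations,
# not values shown in this preview.
N = 64                 # resampled points per gesture
SQUARE_SIZE = 250.0    # reference square for scaling
HALF_DIAGONAL = 0.5 * math.hypot(SQUARE_SIZE, SQUARE_SIZE)

def path_length(pts):
    # Total Euclidean length of the stroke.
    return sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))

def resample(pts, n=N):
    # Step 1: resample the path into n equidistantly spaced points.
    interval = path_length(pts) / (n - 1)
    pts, out, D, i = list(pts), [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and D + d >= interval:
            t = (interval - D) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # q starts the next segment on the following pass
            D = 0.0
        else:
            D += d
        i += 1
    while len(out) < n:       # guard against floating-point shortfall
        out.append(pts[-1])
    return out

def centroid(pts):
    return (sum(x for x, _ in pts) / len(pts), sum(y for _, y in pts) / len(pts))

def rotate_by(pts, theta):
    # Rotate all points by theta radians about the centroid.
    (cx, cy), c, s = centroid(pts), math.cos(theta), math.sin(theta)
    return [((x - cx) * c - (y - cy) * s + cx,
             (x - cx) * s + (y - cy) * c + cy) for x, y in pts]

def rotate_to_zero(pts):
    # Step 2: rotate once so the indicative angle (centroid to first point) is zero.
    cx, cy = centroid(pts)
    return rotate_by(pts, -math.atan2(cy - pts[0][1], cx - pts[0][0]))

def scale_and_translate(pts, size=SQUARE_SIZE):
    # Step 3: scale the bounding box non-uniformly to a reference square,
    # then translate the centroid to the origin.
    xs, ys = [x for x, _ in pts], [y for _, y in pts]
    w, h = (max(xs) - min(xs)) or 1e-9, (max(ys) - min(ys)) or 1e-9
    pts = [(x * size / w, y * size / h) for x, y in pts]
    cx, cy = centroid(pts)
    return [(x - cx, y - cy) for x, y in pts]

def path_distance(a, b):
    # Mean distance between corresponding points of two resampled paths.
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def distance_at_best_angle(pts, tmpl, a=-math.radians(45), b=math.radians(45),
                           tol=math.radians(2)):
    # Step 4: golden-section search over rotations for the smallest path distance.
    phi = 0.5 * (-1.0 + math.sqrt(5.0))
    x1, x2 = phi * a + (1 - phi) * b, (1 - phi) * a + phi * b
    f1 = path_distance(rotate_by(pts, x1), tmpl)
    f2 = path_distance(rotate_by(pts, x2), tmpl)
    while abs(b - a) > tol:
        if f1 < f2:
            b, x2, f2 = x2, x1, f1
            x1 = phi * a + (1 - phi) * b
            f1 = path_distance(rotate_by(pts, x1), tmpl)
        else:
            a, x1, f1 = x1, x2, f2
            x2 = (1 - phi) * a + phi * b
            f2 = path_distance(rotate_by(pts, x2), tmpl)
    return min(f1, f2)

def normalize(raw_points):
    # Steps 1-3 applied to a raw (x, y) point list.
    return scale_and_translate(rotate_to_zero(resample(raw_points)))

def recognize(raw_points, templates):
    # templates: hypothetical dict mapping gesture name -> normalized point list.
    candidate = normalize(raw_points)
    best_name, best_dist = None, float("inf")
    for name, tmpl in templates.items():
        d = distance_at_best_angle(candidate, tmpl)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name, 1.0 - best_dist / HALF_DIAGONAL  # score near 1 = close match

As a usage sketch under the same assumptions, each symbol in Figure 1 would be recorded as a list of (x, y) points, normalized once with normalize(), and stored under its gesture name; calling recognize() on a new stroke then returns the closest template's name and a score that approaches 1 for close matches.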
INTRODUCTION
Pen, finger, and wand gestures are increasingly relevant to many new user interfaces for mobile, tablet, large display, and tabletop computers [2,5,7,10,16,31]. Even some desktop applications support mouse gestures. The Opera Web Browser, for example, uses mouse gestures to navigate and manage windows.1 As new computing platforms and new user interface concepts are explored, the opportunity for using gestures made by pens, fingers, wands, or other path-making instruments is likely to grow, and with it, interest from user interface designers and rapid prototypers in using gestures in their projects.

However, along with the naturalness of gestures comes inherent ambiguity, making gesture recognition a topic of interest to experts in artificial intelligence (AI) and pattern matching. To date, designing and implementing gesture recognition largely has been the privilege of experts in these fields, not experts in human-computer interaction (HCI), whose primary concerns are usually not algorithmic, but interactive. This has perhaps limited the extent to which novice programmers, human factors specialists, and user interface prototypers have considered gesture recognition a viable addition to their projects, especially if they are doing the algorithmic work themselves.

As an example, consider a sophomore computer science major with an interest in user interfaces. Although this student may be a capable programmer, it is unlikely that he has been immersed in Hidden Markov Models [1,3,25], neural networks [20], feature-based statistical classifiers [4,23], or dynamic programming [18,28] at this point in his career. In developing a user interface prototype, this student may wish to use Director, Flash, Visual Basic, JavaScript or a brand new tool rather than an industrial-strength environment suitable to production-level code. Without a gesture recognition library for these tools, the student's options for adding gestures are rather limited. He can dig into pattern matching journals, try to devise an ad-hoc algorithm of his own [4,19,31], ask for considerable help, or simply choose not to have gestures.

We are certainly not the first to note this issue in HCI. Prior work has attempted to provide gesture recognition for user interfaces through the use of libraries and toolkits [6,8,12,17]. However, libraries and toolkits cannot help where they do not exist, and many of today's rapid prototyping tools

1 http://www.opera.com/products/desktop/mouse/

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
UIST'07, October 7-10, 2007, Newport, Rhode Island, USA.
Copyright 2007 ACM 978-1-59593-679-2/07/0010...$5.00.

