UCF CAP 5937 - Assignment 3 – Symbol Recognizer

Assignment 3 – Symbol Recognizer
CAP 5937
Due: 10/08/07 11:59pm

The focus of this third assignment is to learn the intricacies of creating a machine-learning-based symbol recognizer. This is the first part of a two-part assignment in which you will create a simple pen-based calculator. In this assignment you are going to create a symbol recognition engine based on Rubine's 1991 SIGGRAPH paper, "Specifying Gestures by Example."

Requirements

Your symbol recognizer must be able to recognize the following symbols: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, +, -, *, t, a, n, s, c, i, and the square root symbol. You should also be able to use a scribble gesture to erase symbols.

You will also need to perform an experiment to evaluate your recognizer's accuracy. The experiment will explore how the number of training samples used per symbol affects recognition. I would suggest testing your recognizer with 5, 10, 15, and 20 samples per symbol. For the test itself, I would write each symbol 5 or 10 times, which should give you a good accuracy number. Please put the results of your experiment in the README file.

Strategy

To implement your symbol recognizer, read the Rubine paper. It is fairly straightforward once you understand the mathematics. Things you should keep in mind:

1. You need to find a way to invoke the recognizer. You can have it run in real time or in batch mode (for example, lassoing the symbol or symbols and tapping to invoke the recognizer).
2. Regardless of the invocation method, you will need some form of ink segmentation, since you must be able to detect when a symbol has 2 or more strokes. Simple line segment intersection should suffice here, since it is relatively easy to determine whether you have a multi-stroke symbol in our alphabet (see the intersection sketch following this assignment text).
3. Rubine's algorithm uses matrices, so make use of the Matrix library found on the course webpage.
4. Rubine's algorithm is designed to deal with only single-stroke symbols. To recognize multi-stroke symbols, simply compute the features for each stroke and take the average (see the feature and classifier sketches following this assignment text).
5. You will need to show recognition results to the user. A simple text box is fine, but if you want to be more elaborate, feel free to do so.

Deliverables

You must submit a zip file containing your source and any relevant files needed to compile and run your application. Also include a README file describing what works and what does not in your application, the results of your accuracy experiment, any known bugs, and any problems you encountered. Please include a file I can open in your application that has all of the symbols written down in ink. This will show me how you wrote your symbols for testing purposes. To submit, you can email me your zip file.

Grading

Grading will be loosely based on the following:
80% correct functionality
20% documentation
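As a concrete starting point for the feature computation the Strategy section refers to, here is a minimal sketch of most of Rubine's thirteen features. This is an illustrative implementation, not the paper's code: it assumes plain Python with the standard math module (rather than the course's Matrix library), assumes each stroke has at least a few sampled points, and omits the two speed/duration features since this sketch carries no timestamps.

```python
import math

def rubine_features(points):
    """Compute a subset of Rubine's per-stroke features.
    `points` is a list of (x, y) samples for one stroke."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]

    # f1, f2: cosine and sine of the initial angle (point 0 to point 2)
    k = min(2, len(points) - 1)
    dx0, dy0 = xs[k] - xs[0], ys[k] - ys[0]
    d0 = math.hypot(dx0, dy0) or 1.0
    f1, f2 = dx0 / d0, dy0 / d0

    # f3, f4: length and angle of the bounding-box diagonal
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    f3, f4 = math.hypot(w, h), math.atan2(h, w)

    # f5, f6, f7: endpoint distance and cosine/sine of the endpoint angle
    dxe, dye = xs[-1] - xs[0], ys[-1] - ys[0]
    f5 = math.hypot(dxe, dye)
    de = f5 or 1.0  # guard against closed strokes (start == end)
    f6, f7 = dxe / de, dye / de

    # f8: total stroke length; f9, f10, f11: signed, absolute,
    # and squared sums of the turning angle between segments
    f8 = f9 = f10 = f11 = 0.0
    for i in range(1, len(points)):
        dx, dy = xs[i] - xs[i - 1], ys[i] - ys[i - 1]
        f8 += math.hypot(dx, dy)
        if i >= 2:
            pdx, pdy = xs[i - 1] - xs[i - 2], ys[i - 1] - ys[i - 2]
            theta = math.atan2(dx * pdy - pdx * dy, dx * pdx + dy * pdy)
            f9 += theta
            f10 += abs(theta)
            f11 += theta * theta
    return [f1, f2, f3, f4, f5, f6, f7, f8, f9, f10, f11]
```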
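The paper's training step then reduces to linear algebra: per-class mean feature vectors, a covariance matrix pooled across classes, and one linear discriminant per class, evaluated as v_c = w_c0 + sum_j w_cj * f_j. Below is a sketch of that math with numpy standing in for the course's Matrix library; the function names and the use of a pseudo-inverse (to survive a singular covariance estimate) are assumptions of this sketch, not part of the paper.

```python
import numpy as np

def train(examples):
    """Rubine-style training. `examples` maps a symbol name to a list
    of feature vectors; returns per-class weights (w0, w)."""
    classes = sorted(examples)
    means = {c: np.mean(np.asarray(examples[c], float), axis=0) for c in classes}

    # Pooled covariance: per-class scatter summed over classes, divided
    # by (total examples - number of classes), as in the paper.
    n_feat = means[classes[0]].shape[0]
    scatter = np.zeros((n_feat, n_feat))
    total = 0
    for c in classes:
        data = np.asarray(examples[c], float)
        diff = data - means[c]
        scatter += diff.T @ diff
        total += len(data)
    cov = scatter / (total - len(classes))

    inv = np.linalg.pinv(cov)  # pseudo-inverse in case cov is singular
    weights = {}
    for c in classes:
        w = inv @ means[c]                       # w_cj = sum_i (cov^-1)_ij * mean_ci
        weights[c] = (-0.5 * (w @ means[c]), w)  # w_c0 = -1/2 * (w_c . mean_c)
    return weights

def classify(weights, features):
    """Pick the class maximizing v_c = w0_c + w_c . f."""
    f = np.asarray(features, float)
    return max(weights, key=lambda c: weights[c][0] + weights[c][1] @ f)
```

At recognition time you would hand classify the feature vector of the segmented symbol; the paper also describes rejecting ambiguous or outlying gestures, which is worth adding once the basic pipeline works.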
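Finally, for Strategy items 2 and 4: a sketch of multi-stroke feature averaging and of a basic segment-intersection test for deciding whether two strokes belong to the same symbol (e.g. the two bars of "+"). The helper names are illustrative, and this orientation-based test ignores collinear and endpoint-touching cases, which should be acceptable for this alphabet.

```python
def multistroke_features(strokes):
    """Per the assignment: compute features for each stroke and average them."""
    vecs = [rubine_features(s) for s in strokes]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 properly crosses segment p3-p4."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def strokes_intersect(a, b):
    """Check whether any segment of stroke `a` crosses any segment of `b`;
    intersecting strokes can be grouped into one multi-stroke symbol."""
    return any(segments_intersect(a[i], a[i + 1], b[j], b[j + 1])
               for i in range(len(a) - 1) for j in range(len(b) - 1))
```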
p329-rubine.pdf (excerpt):

Computer Graphics, Volume 25, Number 4, July 1991

Specifying Gestures by Example
Dean Rubine
Information Technology Center
Carnegie Mellon University
Pittsburgh, PA
Dean.Rubine@cs.cmu.edu

Abstract

Gesture-based interfaces offer an alternative to traditional keyboard, menu, and direct manipulation interfaces. The ability to specify objects, an operation, and additional parameters with a single intuitive gesture appeals to both novice and experienced users. Unfortunately, gesture-based interfaces have not been extensively researched, partly because they are difficult to create. This paper describes GRANDMA, a toolkit for rapidly adding gestures to direct manipulation interfaces. The trainable single-stroke gesture recognizer used by GRANDMA is also described.

Keywords — gesture, interaction techniques, user interface toolkits, statistical pattern recognition

1 Introduction

Gesture, as the term is used here, refers to hand markings, entered with a stylus or mouse, that indicate scope and commands [18]. Buxton gives the example of a proofreader's mark for moving text [1]. A single stroke indicates the operation (move text), the operand (the text to be moved), and additional parameters (the new location of the text). The intuitiveness and power of this gesture hints at the great potential of gestural interfaces for improving input from people to machines, historically the bottleneck in human-computer interaction. Additional motivation for gestural input is given by Rhyne [18] and Buxton [1].

A variety of gesture-based applications have been created. Coleman implemented a text editor based on proofreader's marks [3]. Minsky built a gestural interface to the LOGO programming language [13]. A group at IBM constructed a spreadsheet application that combines gesture and handwriting [18]. Buxton's group produced a musical score editor that uses gestures for entering notes [2] and more recently a graphical editor [9]. In these gesture-based applications (and many others) the module that distinguishes between the gestures expected by the system, known as the gesture recognizer, is hand coded. This code is usually complicated, making the systems (and the set of gestures accepted) difficult to create, maintain, and modify.

Creating hand-coded recognizers is difficult. This is one reason why gestural input has not received greater attention. This paper describes how gesture recognizers may be created automatically from example gestures, removing the need for hand coding. The recognition technology is incorporated into GRANDMA (Gesture Recognizers Automated in a Novel Direct Manipulation Architecture), a toolkit that enables an implementor to create gestural interfaces for applications with direct manipulation ("click-and-drag") interfaces. In the current work, such applications must themselves be built using GRANDMA. Hopefully, this paper will stimulate the integration of gesture recognition into other user interface construction tools.

Very few tools have been built to aid development of gesture-based applications. Artkit [7] provides architectural support for gestural interfaces, but no support for creating recognizers. Existing trainable character recognizers, such as those built from neural networks [6] or dictionary lookup [15],

