Collaboration in Peer Learning Dialogues

Cynthia Kersey and Barbara Di Eugenio
Computer Science, University of Illinois at Chicago, USA
{ckerse2,bdieugen}@uic.edu

Pamela Jordan and Sandra Katz
Learning Research and Development Center, University of Pittsburgh, USA
{pjordan,katz+}@pitt.edu

Abstract

Our project seeks to enhance understanding of collaboration in peer learning dialogues, to develop computational models of peer collaborations, and to create an artificial agent, KSC-PaL, that can collaborate with a human peer via natural language dialogue. We present some initial results from our analysis of this type of dialogue.

1 Introduction

Peer tutoring and collaboration strongly promote learning (Cohen et al., 1982; Rekrut, 1992; van Boxtel et al., 2000); however, there are no models of collaboration in dialogue that can fully explain why collaboration between peers engenders learning for all the peers involved more than other learning situations do, even when one peer is more "expert" than the other. There is general consensus that working together encourages students to generate new ideas that would probably not occur to them when working alone; mechanisms that support such exchanges include co-construction (Hausmann et al., 2004) and knowledge sharing (Soller, 2004). We refer to all these mechanisms collectively as KSC, or "Knowledge Sharing and Construction". To contribute to an increased understanding of peer learning, we have started to apply our balance-propose-dispose model of negotiation (Di Eugenio et al., 2000) to this type of learning dialogue.
In that model, partners first balance their knowledge distributions, then propose a possible next step, and lastly decide to commit to a proposal or to postpone it in order to further balance the knowledge needed for problem solving. We expect this model to be affected by (a) the knowledge distribution, (b) a collaborator's estimates of what types of knowledge the partner has, (c) decisions on what knowledge to share, and (d) the detection of proposals and of problem-solving or collaboration impasses. The initial model was based on the Coconut dialogues, collected in a setting where the task was simple (furnishing a two-room apartment) and knowledge was equally distributed. Our new domain is the fundamentals of data structures and algorithms in Computer Science, and the task is finding conceptual mistakes in simple code. Not only is the knowledge much more complex, but it is also of different kinds: e.g., one collaborator may know (more) about null pointers and the other about loops.

In this poster, we briefly outline some preliminary results from our data collection.

2 Collaborating on Data Structures Tasks

We have developed a set of data structures tasks for peers to solve, along with pre/post tests to measure whether the interaction is beneficial (a beneficial collaboration is one in which at least one student learns). We pilot tested both in a face-to-face setting; we then proceeded to collect data in a computer-mediated environment. The specific task is debugging or explaining easy routines for fundamental data structures such as linked lists, stacks, and binary search trees. We are interested in conceptual, not syntactic, mistakes, and we inform our subjects of this.

We have chosen a computer-mediated environment to more closely mimic the situation a student will face when interacting with KSC-PaL, the artificial peer agent we intend to develop based on our balance-propose-dispose model and our empirical findings.
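The paper does not reproduce the exercise code, but the kind of conceptual (rather than syntactic) linked-list bug at issue can be sketched as follows. This is our own illustration, assuming Python; the names Node and sorted_insert are hypothetical, and the study's actual exercises may have used a different language. The bug mirrors the "dummy node" discussion in the Figure 1 excerpt: without a dummy head, an element that belongs before the first node ends up in the wrong position (bat, ant, cat instead of ant, bat, cat).

```python
class Node:
    """A singly linked list node."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def to_list(head):
    """Collect node values into a Python list, for inspection."""
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

def sorted_insert_buggy(head, value):
    """Insert value into a sorted list.

    Conceptual bug: the scan starts AT the head node, so a value
    smaller than the head can never be placed before it -- it is
    inserted one position too late.
    """
    cur = head
    while cur.next is not None and cur.next.value < value:
        cur = cur.next
    cur.next = Node(value, cur.next)
    return head

def sorted_insert(head, value):
    """Corrected version: a dummy node allows insertion before the head."""
    dummy = Node(None, head)
    cur = dummy
    while cur.next is not None and cur.next.value < value:
        cur = cur.next
    cur.next = Node(value, cur.next)
    return dummy.next
```

Inserting "ant" into the list bat -> cat with the buggy routine yields bat, ant, cat, exactly the wrong ordering the students predict in the Figure 1 dialogue; the dummy-node version yields ant, bat, cat.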
In addition, in (Di Eugenio et al., 2000) we had shown that such a setting affects the length of turns and turn taking, but does not change the nature of the collaboration. Our computer-mediated environment supports typed natural language dialogue, task-specific drawing tools, and menu-based code mark-up. These features were based in part on observations of the face-to-face interactions: the peers frequently drew data structures and deictically referred to the code they were diagnosing or explaining. In addition, they collaboratively marked up the code under discussion.

14:01:56 C: unless the "first" is just a dummy node
14:02:20 D: i don't think so because it isn't depicted as a node in the diagram
14:02:28 C: OK
14:03:13 C: so you would draw something like...
14:03:24 D: i believe it will make the list go like this: bat, ant, cat
14:03:40 C: draw: add pointer second (n100)
14:03:44 C: draw: move n100
14:03:46 C: draw: link n100 to
14:03:47 C: draw: link n100 to n002

Figure 1: An excerpt from one of our dialogues

We have collected dialogues using the computer-mediated interface for 12 pairs thus far. Each dyad was presented with 5 exercises, and all but two dyads solved all 5. Figure 1 shows a short excerpt from one dialogue; note that it includes drawing actions in addition to verbal exchanges.

These dialogues differ from the face-to-face dialogues collected in the pilot study in that the dyads appear to be more focused when using the computer-mediated environment. There is only a small amount of off-topic chat compared with the face-to-face dialogues. There is also less hedging and hesitation in making problem-solving suggestions. The drawing appeared to be more purposeful as well, although this could be the result of the constraints of the drawing tool rather than of the environment itself. Interestingly for our balance-propose-dispose model, proposals can be conveyed by drawing, as in Figure 1: C announces he will propose a solution at 14:03:13, and then proceeds to draw it starting at 14:03:40. We have observed at least 5 instances in our dialogues in which a problem-solving proposal was made by drawing.

In addition, the drawing tool was not used consistently by the dyads. We have analyzed in more detail 18 of the debugging dialogues, i.e., 9 dyads each solving exercise 3 on linked lists and exercise 4 on stacks. Seven dyads (78%) drew something for problem 3, but only 4 dyads (44%) did for problem 4; two of those four dyads used the tool just once, to place a single object on the screen. This could be related to the nature of the problems, since exercise 3 involved linked lists, which are generally believed to be more confusing than stacks.

Another interesting observation was that 8 of the 18 dialogues do not appear to follow a recursive, stack-based dialogue structure (Rosé et al., 1995). In these 8 dialogues, the dyads separately identify the errors in the programs and then return later to discuss and correct them. However, the topics were not revisited according to recency of mention but by
