UT PSY 394U - Autonomous Robot Learning of Foundational Representations

Autonomous Robot Learning of Foundational Representations
Benjamin Kuipers (with Jefferson Provost and Joseph Modayil)
University of Texas at Austin, 27 August 2007

How does a baby (human or robot) get knowledge of its own?
• "The baby, assailed by eyes, ears, nose, skin, and entrails at once, feels it all as one great blooming, buzzing confusion …"
  – [William James, 1890]

Developmental Robotics
• A variety of baby robots have been created to study foundational learning.
  – The RobotCub project: iCub
  – Brian Scassellati's Nico, at Yale
  – Minoru Asada et al., CB2, Osaka U.
• The mechanical engineering is impressive, but the real difficulty is the learning.
  – The robot's knowledge representation must be learned, not programmed.

Baby Robots
(Figure slide.)

Our Gedankenexperiment
• Imagine a baby robot, a learning agent, born with uninterpreted sensors and effectors.
• It has only pixel-level experience:
  – A disorganized collection of sensor elements
  – Incremental motor signals
• How does it learn object-level concepts?
  – Places, paths, objects, actions, etc.
  – The macro-scale components of adult knowledge

The Gedankenexperiment (2)
• In biological reality, some knowledge is innate to the individual.
  – Innate knowledge is learned by the species over evolutionary time.
    • Breadth-first search
  – We pretend that it is learned by the individual.
    • Depth-first search
• The gedankenexperiment helps illuminate the knowledge and how it can be learned.

The Gedankenexperiment (3)
• Current research strategy:
  – Devise ways to learn foundational concepts using only general-purpose statistical learning methods.
• Future:
  – Define a toolkit of statistical learning methods.
  – Generate all learnable concepts.
  – Search for the most productive concepts.

An Ontology is the Foundation for Knowledge Representation
• An ontology specifies
  – the categories that individuals can belong to (the sets variables can be quantified over), and
  – the relations that can be defined over those categories.
• Axioms embody the content of knowledge.
• The ontology is the language of objects and relations for expressing the axioms.
  – (Some authors include axioms in the ontology.)

The Problem of Learning New Ontology
• How does a learning agent (robot or baby) get from a pixel ontology of low-level sensation to an object ontology of high-level concepts?
• How can an agent possibly learn to represent new types of things?

One Way to Abstract Experience
• Discrete states and places are abstracted from patterns of continuous behavior.

Explaining Sensor Readings
• Space is the minimal explanation for similarities among sense values.
  – Correlations among pixel values give the structure of sensory arrays.
  – How pixels change in response to motor signals gives the structure of the motor system.
• Pierce & Kuipers [AIJ, 1997]
  – See also Philipona et al. [2003a,b] and Olsson, Nehaniv & Polani [2006].

Life-Long Learning
• Concepts of space, objects, and actions act as a constant foundation for knowledge.
• They are grounded in sensory and motor interaction with the environment.
• Senses and motors change over a lifetime.
• The grounding of these concepts must adapt.

Lassie "sees" the world with a Laser Rangefinder
• 180 ranges over a 180° planar field of view
• About 13" above the ground plane
• 10-12 scans per second

Laser Rangefinder Image
• 180 narrow beams at 1° intervals.

(Figure slides: Disorganized Sensor: 180 "Pixels"; Structured Sensor Array; The Egocentric Range Image; The World-Centered Range Image; Occupancy Grid.)

Statistical Learning Methods Used
• Correlation (time-series and histograms)
• Agglomerative clustering
• Multidimensional scaling
• Dimensionality reduction (PCA, Isomap)
• Sensory flow
• Image matching (ICP)
• Markov localization (max likelihood pose)
• …
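Two of the methods just listed, correlation and multidimensional scaling, are already enough to reproduce the Pierce & Kuipers result in miniature. Below is a minimal Python/NumPy sketch, with simulated data and parameter choices that are my own assumptions rather than anything from the AIJ 1997 paper: scramble a smooth 180-beam sensor, correlate pixels over time, turn similarity into distance, and let classical MDS recover the array's layout.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated experience: T scans from a 180-beam sensor.  Each scan is
# spatially smooth noise, so physically adjacent beams are correlated,
# but the agent receives the pixels in a scrambled, unknown order.
T, N = 2000, 180
kernel = np.exp(-0.5 * (np.arange(-10, 11) / 4.0) ** 2)
scans = np.stack([np.convolve(rng.normal(size=N), kernel, mode="same")
                  for _ in range(T)])
perm = rng.permutation(N)                  # hide the true beam order
scans = scans[:, perm]

# Step 1: correlation between pixels over time as a similarity measure.
corr = np.corrcoef(scans.T)                # N x N

# Step 2: similarity -> distance (highly correlated = physically near).
dist = np.sqrt(np.clip(2.0 * (1.0 - corr), 0.0, None))

# Step 3: classical MDS (double-center the squared distances, then
# eigendecompose) to embed the pixels in the plane.
J = np.eye(N) - np.ones((N, N)) / N
B = -0.5 * J @ (dist ** 2) @ J
vals, vecs = np.linalg.eigh(B)             # eigenvalues in ascending order
coords = vecs[:, -2:] * np.sqrt(np.clip(vals[-2:], 0.0, None))

# Check: each pixel's nearest neighbor in the embedding should be a
# physically adjacent beam, recovering the array's 1-D structure.
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
adjacent = np.abs(perm[d.argmin(axis=1)] - perm) == 1
print(f"embedded nearest neighbor is a true neighbor: {adjacent.mean():.0%}")
```

In principle the same move extends to 2-D sensor arrays, which is the sense in which space is the minimal explanation for similarities among sense values.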
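The egocentric-to-world-centered projection and the occupancy grid admit an equally small sketch. The version below assumes a known pose and uses illustrative constants (grid size, resolution, log-odds increments); it is not the mapper that produced the figures above.

```python
import numpy as np

RES = 0.05                       # meters per cell (assumed)
GRID = np.zeros((400, 400))      # log-odds over a 20 m x 20 m patch
L_OCC, L_FREE = 0.9, -0.4        # log-odds evidence increments (assumed)

def to_cell(x, y):
    """World (x, y) in meters -> grid indices, origin at grid center."""
    return int(round(x / RES)) + 200, int(round(y / RES)) + 200

def update_grid(grid, pose, ranges, max_range=8.0):
    """Fuse one 180-beam scan (1-degree spacing) taken at pose into grid.
    Cells along each beam get free-space evidence; the endpoint, if it
    is a real return rather than a max-range miss, gets occupied
    evidence."""
    x, y, heading = pose
    angles = heading + np.radians(np.arange(180) - 90.0)   # 180 deg FOV
    for r, a in zip(ranges, angles):
        hit = r < max_range
        r = min(r, max_range)
        for d in np.arange(0.0, r, RES):                   # free space
            i, j = to_cell(x + d * np.cos(a), y + d * np.sin(a))
            if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
                grid[i, j] += L_FREE
        if hit:                                            # the obstacle
            i, j = to_cell(x + r * np.cos(a), y + r * np.sin(a))
            if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
                grid[i, j] += L_OCC

# Example: a synthetic scan reading 3 m on every beam, taken at the
# origin facing +x, carves out a free half-disc rimmed by obstacles.
update_grid(GRID, pose=(0.0, 0.0, 0.0), ranges=np.full(180, 3.0))
static_occupied = GRID > 0.0     # threshold log-odds for a static map
```

Estimating the pose itself against the growing map is where the Markov localization and ICP image-matching entries on the list come in.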
Objects as Explanations
• A static world explains most observations.
  – So focus on the discrepancies.
• Cluster in space; track over time.
• Gather observations to make shape models.
• Modayil & Kuipers [2004, 2006, 2007].

(Figure slides: Identify Dynamic Sensor Returns; Clustering into Objects; Track Objects over Time.)
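A minimal sketch of the detect-and-cluster step from the figures above: keep the returns the static model fails to explain, then group them by spatial proximity. The 0.3 m linkage gap, the helper names, and the greedy single-linkage pass are assumptions standing in for the agglomerative clustering and the Modayil & Kuipers machinery.

```python
import numpy as np

def dynamic_returns(points, statically_occupied):
    """Keep world-frame scan points that the static model fails to
    explain, i.e. returns landing where the map says free space."""
    return np.array([p for p in points if not statically_occupied(p)])

def cluster(points, gap=0.3):
    """Greedy single-linkage grouping: a point joins (and merges) every
    existing cluster within `gap` meters of it.  A simple stand-in for
    proper agglomerative clustering."""
    clusters = []
    for p in points:
        touch = [i for i, c in enumerate(clusters)
                 if np.min(np.linalg.norm(np.array(c) - p, axis=1)) < gap]
        merged = [p] + [q for i in touch for q in clusters[i]]
        clusters = [c for i, c in enumerate(clusters) if i not in touch]
        clusters.append(merged)
    return [np.array(c) for c in clusters]

# Example: four unexplained returns become two candidate objects.  A real
# system would query the occupancy grid; here everything counts as free.
pts = np.array([[2.0, 0.0], [2.1, 0.1], [2.2, 0.0], [5.0, 5.0]])
objects = cluster(dynamic_returns(pts, statically_occupied=lambda p: False))
print([len(o) for o in objects])           # -> [3, 1]
```

Tracking then associates cluster centroids across consecutive scans, and merging a tracked object's scans over time is what yields the shape models of the slides that follow.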
Describe the Scene
• Describe the scene in terms of:
  – Static world
  – Robot's own pose
  – Object in a fixed position
  – Object and trajectory
• Individual objects

Learning Object Shapes
• Merge range scans to get shape models.
• Cluster shapes to get object categories.

Learning Object Categories
• Clustering shapes by perceptual features

Learn about Actions
• Learn actions to affect objects. Learn:
  – Qualitative description of effect
  – Bounds on prerequisite state
  – Control law to perform the action
• For a mobile robot that can move and push:
  – Move to a desired point in nearby space.
  – Turn to face the object.
  – Push (Move, to get the object to move also).

Learn about Actions (2)
• Learn their properties. Learn to plan.
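The control laws these actions bottom out in can be pictured with a toy proportional controller; the unicycle model, the gains, and the 0.1 rad facing tolerance below are assumptions for illustration, not anything learned by the robot in the talk.

```python
import numpy as np

K_TURN, K_MOVE = 1.5, 0.8          # proportional gains (assumed)

def turn_and_move(pose, target, dt=0.1):
    """One control step for a unicycle robot: rotate until roughly
    facing the target, then translate toward it.  Returns the command
    (v, omega) and the pose integrated one step forward."""
    x, y, th = pose
    bearing = np.arctan2(target[1] - y, target[0] - x)
    err = np.arctan2(np.sin(bearing - th), np.cos(bearing - th))  # wrap
    if abs(err) > 0.1:             # Turn: face the object first
        v, w = 0.0, K_TURN * err
    else:                          # Move: drive in, steering gently
        v = K_MOVE * np.hypot(target[0] - x, target[1] - y)
        w = K_TURN * err
    return (v, w), (x + v * np.cos(th) * dt,
                    y + v * np.sin(th) * dt,
                    th + w * dt)

# Example: starting at the origin facing +x, converge on a point that
# begins behind the robot; Push would set the target just beyond the
# object's centroid along the desired push direction.
pose = (0.0, 0.0, 0.0)
for _ in range(300):
    _, pose = turn_and_move(pose, target=(-1.0, 1.0))
print(np.round(pose[:2], 2))       # close to (-1.0, 1.0)
```

On this reading, learning the action means discovering the qualitative effect, the bounds on the prerequisite state, and a law of roughly this shape, rather than being handed them.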
Summary: Concept Learning
• Space is learned as a minimal explanation for sensory correlations.
• Objects are learned as a minimal explanation for discrepancies from the fixed-value (static world) model.
• Actions are learned as minimal descriptions of motions interacting with objects.
• Plans combine actions to achieve goals.

Roadmap
• Introduction and Overview
• Learning from uninterpreted experience
  – Pierce & Kuipers, 1997
• Abstraction of views, actions, and states
  – Provost, Kuipers & Miikkulainen, 2006
• Learning objects and actions
  – Modayil & Kuipers, 2004, 2006, 2007