UCF COT 4810 - IDA - A Cognitive Agent Architecture

IDA: A Cognitive Agent Architecture[1]

Stan Franklin[2], Arpad Kelemen[2], and Lee McCauley
Institute for Intelligent Systems, The University of Memphis

For most of its four decades of existence, artificial intelligence has devoted its attention primarily to studying and emulating individual functions of intelligence. During the last decade, researchers have expanded their efforts to include systems modeling a number of cognitive functions (Albus, 1991, 1996; Ferguson, 1995; Hayes-Roth, 1995; Jackson, 1987; Johnson and Scanlon, 1987; Laird, Newell, and Rosenbloom, 1987; Newell, 1990; Pollack, ????; Riegler, 1997; Sloman, 1995). There has also been a movement in recent years towards producing systems situated within some environment (Akman, 1998; Brooks, 1990; Maes, 1990b). Some recent work of the first author and his colleagues has combined these two trends by experimenting with cognitive agents (Bogner, Ramamurthy, and Franklin, to appear; Franklin and Graesser, forthcoming; McCauley and Franklin, to appear; Song and Franklin, forthcoming; Zhang, Franklin, and Dasgupta, 1998; Zhang et al., 1998). This paper briefly describes the architecture of one such agent.

By an autonomous agent (Franklin and Graesser 1997) we mean a system situated in, and part of, an environment, which senses that environment and acts on it, over time, in pursuit of its own agenda. It acts in such a way as to possibly influence what it senses at a later time. That is, the agent is structurally coupled to its environment (Maturana 1975; Maturana and Varela 1980). Biological examples of autonomous agents include humans and most animals. Non-biological examples include some mobile robots and various computational agents, including artificial life agents, software agents, and computer viruses.
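The definition above, a system that senses its environment and acts on it over time in pursuit of its own agenda, with its actions influencing its later percepts, can be sketched as a minimal sense-act loop. The class and method names below are illustrative, not from the paper.

```python
# Minimal sketch of an autonomous agent structurally coupled to its
# environment: each action changes the environment, which changes what
# the agent senses next. All names here are illustrative.

class Environment:
    def __init__(self):
        self.state = 0

    def percept(self):
        return self.state

    def apply(self, action):
        self.state += action  # the agent's act alters future percepts


class Agent:
    def __init__(self, goal):
        self.goal = goal  # the agent's own agenda

    def choose_action(self, percept):
        # act in pursuit of the agenda: move the state toward the goal
        return 1 if percept < self.goal else 0


env = Environment()
agent = Agent(goal=3)
for _ in range(5):
    env.apply(agent.choose_action(env.percept()))

print(env.percept())  # prints 3: the agenda has been achieved
```

The point of the loop is the coupling: the percept at each step depends on the agent's earlier actions, which is exactly the Maturana-style structural coupling the paragraph describes.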
Here we'll be concerned with autonomous software agents 'living' in real-world computing systems. Such autonomous software agents, when equipped with cognitive (interpreted broadly) features chosen from among multiple senses, perception, short and long term memory, attention, planning, reasoning, problem solving, learning, emotions, moods, attitudes, multiple drives, etc., will be called cognitive agents (Franklin 1997). Such agents promise to be more flexible, more adaptive, and more human-like than any currently existing software because of their ability to learn and to deal with novel input and unexpected situations. But how do we design such agents?

One way is to model them after humans. We've chosen to design and implement such cognitive agents within the constraints of the global workspace theory of consciousness, a psychological theory that gives a high-level, abstract account of human consciousness and broadly sketches its architecture (Baars, 1988, 1997). We'll call such agents "conscious" software agents.

Global workspace theory postulates that human cognition is implemented by a multitude of relatively small, special purpose processes, almost always unconscious. (It's a multiagent system.) Coalitions of such processes find their way into a global workspace (and into consciousness). This limited capacity workspace serves to broadcast the message of the coalition to all the unconscious processors, in order to recruit other processors to join in handling the current novel situation, or in solving the current problem. All this takes place under the auspices of contexts: goal contexts, perceptual contexts, conceptual contexts, and/or cultural contexts. Each context is, itself, a coalition of processes. There's much more to the theory, including attention, learning, action selection, and problem solving.
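The broadcast cycle just described can be sketched in a few lines: coalitions of processors compete for the limited-capacity workspace, and the winner's message is broadcast to every processor, recruiting those relevant to the current situation. The data model and the activation-based competition below are invented for illustration, not Baars's formulation.

```python
# Toy sketch of global workspace theory's broadcast cycle: many small
# unconscious processors compete as coalitions; the most active coalition
# enters the workspace, and its message is broadcast to all processors,
# recruiting those that can help. Names and scoring are illustrative.

class Processor:
    def __init__(self, name, topic, activation):
        self.name, self.topic, self.activation = name, topic, activation

    def responds_to(self, message):
        return self.topic == message["topic"]


def broadcast_cycle(processors, coalitions):
    # the most active coalition wins access to the global workspace
    winner = max(coalitions, key=lambda c: sum(p.activation for p in c))
    message = {"topic": winner[0].topic, "from": [p.name for p in winner]}
    # the broadcast recruits other processors relevant to the situation
    recruits = [p for p in processors
                if p.responds_to(message) and p not in winner]
    return message, recruits


a = Processor("parse-email", "email", 0.9)
b = Processor("lookup-billet", "database", 0.4)
c = Processor("compose-reply", "email", 0.2)
msg, recruits = broadcast_cycle([a, b, c], [[a], [b]])
print(msg["topic"], [p.name for p in recruits])  # email ['compose-reply']
```

Note that "consciousness" in this sketch is nothing more than winning access to the workspace; everything else stays unconscious and only reacts to the broadcast.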
Conscious software agents should implement the major parts of the theory, and should always stay within its constraints.

IDA (Intelligent Distribution Agent) is to be such a conscious software agent, developed for the Navy. At the end of each sailor's tour of duty, he or she is assigned to a new billet. This assignment process is called distribution. The Navy employs some 200 people, called detailers, full time to effect these new assignments. IDA's task is to facilitate this process by playing the role of detailer as best she can.

Designing IDA presents both communication problems and constraint satisfaction problems. She must communicate with sailors via email and in natural language, understanding the content. She must access a number of databases, again understanding the content. She must see that the Navy's needs are satisfied, for example, the required number of sonar technicians on a destroyer with the required types of training. She must hold down moving costs. And she must cater to the needs and desires of the sailor as well as is possible.

Here we'll briefly describe a design for IDA, including a high-level architecture and the mechanisms by which it's to be implemented. With the help of diagrams we'll describe a preconscious version of IDA, and then discuss the additional mechanisms needed to render her conscious.

IDA will sense her world using three different sensory modalities. She'll receive email messages, she'll read database screens and, eventually, she'll sense via operating system commands and messages. Each sensory mode will require at least one knowledge base and a workspace.

[1] With indispensable help from the other members of the Conscious Software Research Group, including Ashraf Anwar, Miles Bogner, Scott Dodson, Art Graesser, Derek Harter, Aregahegn Negatu, Uma Ramamurthy, Hongjun Song, and Zhaohua Zhang.
[2] Supported in part by ONR grant N00014-98-1-0332.
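The constraint satisfaction side of IDA's task described above — meeting the Navy's hard training requirements while trading off moving costs against sailor preferences — can be sketched as a filter-then-rank step. The data model and scoring weights below are invented for illustration; they are not IDA's actual mechanism.

```python
# Toy sketch of the kind of constraint checking a detailer's task
# involves: a billet is feasible only if the sailor has the required
# training (a hard Navy constraint), and feasible billets are ranked
# by sailor preference against moving cost (soft constraints).
# The data model and weights are invented for illustration.

def feasible(sailor, billet):
    # hard constraint: required training must be a subset of the
    # sailor's training
    return billet["required_training"] <= sailor["training"]


def score(sailor, billet):
    # soft constraints: cater to preferences, hold down moving costs
    preference = 1.0 if billet["location"] in sailor["preferred"] else 0.0
    return preference - 0.001 * billet["moving_cost"]


sailor = {"training": {"sonar", "firefighting"}, "preferred": {"Norfolk"}}
billets = [
    {"id": "destroyer-sonar", "required_training": {"sonar"},
     "location": "Norfolk", "moving_cost": 400},
    {"id": "carrier-nuke", "required_training": {"nuclear"},
     "location": "San Diego", "moving_cost": 900},
]
candidates = [b for b in billets if feasible(sailor, b)]
best = max(candidates, key=lambda b: score(sailor, b))
print(best["id"])  # destroyer-sonar
```

Separating the hard feasibility test from the soft scoring mirrors the paragraph's split between the Navy's requirements, which must be satisfied, and costs and preferences, which are only to be optimized.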
The mechanism here will be based loosely on the Copycat architecture (Hofstadter 1995; Hofstadter and Mitchell 1994; Zhang et al. 1998). Each knowledge base will be a slipnet, a fluid semantic net. The workspace (working memory) will allow perception (comprehension), a constructive process. See the right side of Figure 1 for five such pairs. Each, other than the email pair, will understand material from a particular database, for example personnel records, a list of job openings, a list of

[Figure 1: The preconscious IDA architecture. Modules shown include the Behavior Net, Focus, Drives, Codelets, five workspace/slipnet pairs (Email, PRD, SPIRIT, Member Data, Requisition), Associative Memory (SDM), Intermediate Term Memory (case), the Output/Input Composition Workspace, the Emotion Mechanism, Template Memory, Offer Memory, the Selection Module, and the Selection Knowledge Base. A solid arrow signifies data transfer; a dotted arrow signifies potential activation of the target, which can occur with data transfer.]
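A slipnet, the "fluid semantic net" used as each knowledge base, can be sketched as concept nodes joined by weighted links, with activation spreading from perceived concepts to their neighbors. The spreading rule below is a simplified invention for illustration, not Hofstadter's exact scheme.

```python
# Minimal sketch of a slipnet: concept nodes linked with weights, and
# activation spreading from a perceived concept to its neighbors. The
# spreading rule is invented for illustration, not Copycat's actual one.

class Slipnet:
    def __init__(self):
        self.activation = {}   # node -> activation level
        self.links = {}        # node -> [(neighbor, weight)]

    def add_link(self, a, b, weight):
        self.activation.setdefault(a, 0.0)
        self.activation.setdefault(b, 0.0)
        self.links.setdefault(a, []).append((b, weight))
        self.links.setdefault(b, []).append((a, weight))

    def perceive(self, node, amount=1.0):
        # perception activates the concept matched in the input
        self.activation[node] += amount

    def spread(self):
        # each node passes a weighted share of its activation to its
        # neighbors, so related concepts become active too
        delta = {n: 0.0 for n in self.activation}
        for node, level in self.activation.items():
            for neighbor, weight in self.links.get(node, []):
                delta[neighbor] += level * weight
        for node in self.activation:
            self.activation[node] += delta[node]


net = Slipnet()
net.add_link("transfer", "billet", 0.5)
net.add_link("billet", "destroyer", 0.3)
net.perceive("transfer")   # e.g. an email message mentions a transfer
net.spread()
print(round(net.activation["billet"], 2))  # 0.5
```

The "fluidity" of a real slipnet comes from link weights and node depths changing with context; this sketch shows only the spreading step that lets a perceived word activate the concepts needed to comprehend it.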

