August 31, 1994

Consciousness: Perspectives from Symbolic and Connectionist AI

William Bechtel
Program in Philosophy, Neuroscience, and Psychology
Department of Philosophy
Washington University in St. Louis

1. Computational Models of Consciousness

For many people, consciousness is one of the defining characteristics of mental states. Thus, it is quite surprising that consciousness has, until quite recently, had very little role to play in the cognitive sciences. Three very popular multi-authored overviews of cognitive science, Stillings et al. [33], Posner [26], and Osherson et al. [25], do not have a single reference to consciousness in their indexes. One reason this seems surprising is that the cognitive revolution was, in large part, a repudiation of behaviorism's proscription against appealing to inner mental events. When researchers turned to consider inner mental events, one might have expected them to turn to conscious states of mind. But in fact the appeals were to postulated inner events of information processing. The model for many researchers of such information processing is the kind of transformation of symbolic structures that occurs in a digital computer. By positing procedures for performing such transformations of incoming information, cognitive scientists could hope to account for the performance of cognitive agents. Artificial intelligence, as a central discipline of cognitive science, has seemed to impose some of the toughest tests on the ability to develop information-processing accounts of cognition: it required its researchers to develop running programs whose performance one could compare with that of our usual standard for cognitive agents, human beings.
As a result of this focus, for AI researchers to succeed, at least in their primary task, they did not need to attend to consciousness; they simply had to design programs that behaved appropriately (no small task in itself!).

This is not to say that consciousness was totally ignored by artificial intelligence researchers. Some aspects of our conscious experience seemed critical to the success of any information-processing model. For example, conscious agents exhibit selective attention. Some information received through their senses is attended to; much else is ignored. What is attended to varies with the task being performed: when one is busy with a conceptual problem, one may not hear the sounds of one's air conditioner going on and off, but if one is engaged in repairing the controls on the air conditioning system, one may be very attentive to these sounds. In order for AI systems to function in the real world (especially if they are embodied in robots) it is necessary to control attention, and a fair amount of AI research has been devoted to this topic.

Moreover, conscious thought seems to be linear: we are not aware of having multiple thoughts at once, but rather of one thought succeeding another. Typically other processing is occurring at the same time, but it is not brought into the linear sequence of consciousness. This is suggestive of the role of an executive in some AI systems, which is responsible for coordinating and directing the flow of information needed for action. Johnson-Laird [19], who argues for the need for much of the computation underlying behavior to proceed in parallel, proposes that consciousness arises with a high-level processing system that coordinates lower-level processes.

Finally, conscious states are states people have access to, can report on to others, and can rely on in conducting their own actions.
Capturing an aspect of this has turned out to be particularly important in a variety of computer programs, especially those we rely on to make or advise about decisions. We want to query them about their decisions or recommendations so as to evaluate whether they were reasonable. But Johnson-Laird points out that humans go much further: we can use information about our own states to guide our actions. He therefore proposes that what is needed for cognitive systems to be conscious is that they possess a recursive ability to model (albeit partially and incompletely) what is going on in themselves, and to use such models in controlling future activity (mental and physical).

The strategy employed by Johnson-Laird (himself a psychologist, albeit one with a strong computational perspective) and those AI researchers who have taken consciousness seriously has been to focus on the functions of consciousness: how is it that our information processing is influenced by being conscious? They then try to capture these functional elements in either schematic designs or actual AI systems. Until recently most theorists who took up the question of how an AI system might exhibit consciousness operated within the framework of symbolic AI. A symbolic AI system is one in which explicit symbol structures within the computer represent pieces of information and the system employs rules to transform these structures. Symbolic AI has been quite successful in modeling a variety of cognitive activities but has also exhibited some clear limitations. Those frustrated with the problems of working in traditional AI have recently brought about a renaissance in another form of AI modeling, one that is not grounded on performing formal operations on complex representations.
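Johnson-Laird's proposal of a partial, recursive self-model might be illustrated with a toy sketch. This is a hypothetical illustration, not Johnson-Laird's actual program: the agent, its state variables, and the threshold are all invented here. The point it captures is only the structural one from the proposal: the system's actions are guided by a coarse model of its own processing rather than by its raw internal state.

```python
# Hypothetical sketch of Johnson-Laird's proposal: a system keeps a
# partial, incomplete model of its own processing and consults that
# self-model (not its raw state) when choosing its next action.
# All names and thresholds here are invented for illustration.

class Agent:
    def __init__(self):
        self.load = 0                       # actual internal state: pending work
        self.self_model = {"busy": False}   # partial model of that state

    def perceive(self, new_tasks):
        self.load += new_tasks
        # The self-model tracks the state only coarsely -- it is
        # partial and incomplete, as the proposal requires.
        self.self_model["busy"] = self.load > 2

    def act(self):
        # Action is guided by the self-model, not by self.load directly.
        if self.self_model["busy"]:
            self.load -= 1
            return "defer new input, work down backlog"
        return "attend to new input"

agent = Agent()
agent.perceive(4)
decision = agent.act()
```

The self-model here is "recursive" only in the minimal sense that the system represents facts about itself; a fuller implementation would let the model cover the modeling process too.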
Drawing their inspiration from the basic conception of how a brain works, these theorists develop models using processing units which can take on activations and, as a result, excite or inhibit other units. Systems of such processing units are referred to as connectionist or neural network systems. (For introductions to connectionist modeling, see Rumelhart, McClelland, and the PDP Research Group [29] and Bechtel and Abrahamsen [2].) As we shall see below, connectionism provides quite different resources and challenges for someone seeking to explain consciousness. But for some connectionists, the enterprise is much like that for practitioners of more traditional AI: identify functional features of consciousness and show how they might be explicated using a connectionist model.

One connectionist who has approached consciousness in this way is Paul Churchland [5]. Churchland identifies the following features of consciousness:

1) Consciousness involves short-term memory.
2) Consciousness is independent of sensory inputs.
3)
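The connectionist picture described above can be sketched minimally. In this illustration (the logistic squashing function and the particular weights are assumptions chosen for clarity, not taken from any model discussed in the text), each unit's new activation is a squashed weighted sum of the other units' activations; a positive weight excites, a negative weight inhibits.

```python
# Minimal sketch of one update step in a connectionist network:
# units take on activations and, via weighted connections, excite
# (positive weight) or inhibit (negative weight) other units.
# The logistic activation and the weights are illustrative assumptions.
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def update(activations, weights):
    """One synchronous update: unit i's new activation is the
    logistic of the weighted sum of all current activations."""
    return [
        logistic(sum(w * a for w, a in zip(row, activations)))
        for row in weights
    ]

# Two units: unit 0 excites unit 1 (weight +2.0),
# while unit 1 inhibits unit 0 (weight -2.0).
weights = [[0.0, -2.0],
           [2.0,  0.0]]
acts = update([1.0, 0.0], weights)
```

With unit 1 initially silent, unit 0 receives no inhibition (activation 0.5, the logistic of zero net input), while unit 0's excitation drives unit 1's activation well above 0.5.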

