UT CS 343 - Philosophical Arguments Against Strong AI



Philosophical Arguments Against "Strong" AI

Strong vs. Weak AI
• "Weak" AI just claims the digital computer is a useful tool for studying intelligence and for developing useful technology. A running AI program is at most a simulation of a cognitive process but is not itself a cognitive process. Analogously, a meteorological computer simulation of a hurricane is not a hurricane.
• "Strong" AI claims that a digital computer can in principle be programmed to actually BE a mind: to be intelligent, to understand, perceive, have beliefs, and exhibit other cognitive states normally ascribed to human beings.

Searle's Chinese Room
• Imagine an English-speaking human being who knows no Chinese is put in a room and asked to simulate the execution of a computer program operating on Chinese characters which he/she does not understand.
• Imagine the program the person is executing is an AI program which receives natural-language stories and questions in Chinese and responds appropriately with written Chinese sentences.
• The claim is that even if reasonable natural-language responses are being generated that are indistinguishable from ones a native Chinese speaker would generate, there is no "understanding," since only meaningless symbols are being manipulated.

The Turing Test
• If the response of a computer in an unrestricted textual natural-language conversation cannot be distinguished from that of a human being, then it can be said to be intelligent.
• Searle doesn't directly question whether a computer could pass the Turing test. Rather, he claims that even if it did, it would not exhibit "understanding."

  "Hi! Are you a computer?"
  "No. My name is Mary."
  "Are you kidding? I'm Hal and I can't even multiply two-digit numbers!"

Responses to Searle
• The Systems Reply: The person doesn't understand Chinese, but the whole system of the program, room, plus person understands Chinese.
• The Robot Reply: If you give the computer a robotic body and sensors through which it interacts with the world in the same way as a person, then it would understand.
• The Brain Simulator Reply: If the program were actually simulating the firing of all the neurons in a human brain, then it would understand.
• The Combination Reply: If the program were simulating a human brain AND had a robotic body and sensors, then it would understand.
• The Other Minds Reply: If there is no understanding in the room, then how do we ever know that anyone ever understands anything?
• The Many Mansions Reply: Maybe a digital computer won't work, but you can build an artificial intelligence using different devices more like neurons that will understand.

Systems Reply
• Searle's response is to let the person internalize the entire system, memorizing the program and all intermediate results.
• Assuming this were somehow actually possible, the person would arguably contain two minds, one which understood English and one that understood Chinese. The fact that the English mind doesn't "understand" the Chinese mind seems beside the point for the understanding of the Chinese mind itself.
• According to Searle, the Chinese room lacks the "causal powers of the brain" and therefore cannot understand. Why doesn't the room, or silicon chips, have such "causal powers"? How would we know whether the "brains" of an intelligent alien species have such "causal powers"? Searle claims this is an "empirical question" but gives no experimental procedure for determining it.

Robot Reply
• Searle's response is that even if the symbols entering the room come from television cameras and other sensors and the outputs control motors, the basic lack of "understanding" doesn't change.
• Some AI researchers still believe it is important to have symbols "grounded" in actual experience with the physical world in order for them to have "meaning."
• In any case, it would probably be extremely difficult to write a program with all the knowledge of the physical world necessary to pass the Turing test without having learned it from actual interaction with the world.

Brain Simulator Reply
• Searle's response is that even a formal simulation of all the properties of the brain wouldn't have the "causal properties" of the brain that allow for intentionality and "understanding."
• Therefore, if each of your neurons were incrementally replaced with silicon circuits that replicated their I/O behavior, your observable behavior would not change, but, according to Searle, at some point you would stop actually "understanding" anything.

Other Minds Reply
• Searle's response is that of course anyone can be fooled into attributing "understanding" when there actually is none, but that does not change the fact that no real understanding is taking place.
• However, there then seems to be no empirical test that could actually decide whether "understanding" is taking place or not, and solipsism is the only truly reliable recourse.

Many Mansions Reply
• Searle's response is that strong AI is committed to the use of digital computers and that he has no argument against intelligence based on potential alternative physical systems that possess "causal processes."
• Searle is not a dualist in the traditional sense and grants that the mind is based on physical processes; he holds only that a computer program does not possess the proper physical processes.
• He claims that, if anything, proponents of strong AI believe in a kind of dualism, since they believe the critical aspect of mind is in non-physical software rather than in physical hardware.

The Emperor's New Mind (and other fables)
• Roger Penrose, a distinguished Oxford mathematician and physicist, has recently published a couple of books critical of strong AI (The Emperor's New Mind, Shadows of the Mind).
• His basic argument is that Gödel's Incompleteness Theorem provides strong evidence against strong AI.
• Unlike Searle, he is unwilling to grant the possibility that a computer could actually ever pass the Turing test, since he believes this would require abilities that are uncomputable.
• However, he is also not a dualist and believes that the behavior of the brain is actually physically determined.
• Since current theory in physics is either computable or non-deterministic (truly random), he believes that a new physics needs to be developed that unifies quantum mechanics and general relativity (quantum gravity theory) and that is deterministic but noncomputable.

Gödel's Theorem
• Gödel's theorem states
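Searle's point that "only meaningless symbols are being manipulated" can be made concrete with a toy sketch. This is not from the lecture; the rulebook, phrases, and function name are invented purely for illustration. The "room" maps input symbol strings to output symbol strings by pure table lookup; nothing in the program represents what any of the symbols mean, which is exactly the situation of Searle's non-Chinese-speaking operator.

```python
# Toy Chinese Room: answers are produced by rule-following alone.
# The rulebook is a hypothetical stand-in for the AI program's rules;
# a program that actually passed the Turing test would be vastly larger.
RULEBOOK = {
    "你好吗?": "我很好。",          # "How are you?" -> "I am fine."
    "你叫什么名字?": "我叫小明。",   # "What is your name?" -> "My name is Xiaoming."
}

def chinese_room(symbols: str) -> str:
    """Return whatever output string the rulebook pairs with the input.

    Like Searle's operator, this function only matches shapes of symbols
    to shapes of symbols; no step here involves their meaning.
    """
    # Default reply: "Sorry, I don't understand."
    return RULEBOOK.get(symbols, "对不起，我不明白。")

print(chinese_room("你好吗?"))  # prints 我很好。
```

From the outside, the replies may look competent; the Systems, Robot, and other replies above are different diagnoses of whether anything in such a setup could count as understanding.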

