UCI P 140C - COMPUTATIONAL COGNITIVE SCIENCE

Outline:
•Cognitive Revolution
•Artificial Intelligence
•Weak vs. Strong AI
•Turing Test
•A classic conversational agent
•Example of modern conversational agents
•Future of AI
•Computational Modeling
•Why do we need computational models?
•Neural Networks
•Idealized neurons (units)
•An example calculation for a single (artificial) neuron
•Multi-layered Networks
•A classic neural network: NETtalk
•Different ways to represent information with neural networks: localist representation
•Distributed Representations (aka Coarse Coding)
•Suppose we lost unit 6
•Advantage of Distributed Representations
•Neural Network Models
•Recent Neural Network Research (since 2006)
•Samples generated by the network by propagating activation from label nodes down to input nodes (e.g., pixels)
•Examples of correctly recognized handwritten digits that the network had never seen before
•Other Demos & Tools

Cognitive Revolution
•The development of the computer led to the rise of cognitive psychology and artificial intelligence
•BINAC: the Binary Automatic Computer, developed in 1949

Artificial Intelligence
•Constructing artificial, computer-based systems that produce intelligent outcomes
•Examples:
 –Game-playing programs (e.g., Deep Blue)
 –Intelligent robots (the Mars rovers, DARPA's Urban Challenge)
 –The Netflix competition
 –Conversational agents

Weak vs. Strong AI
•Weak AI: using AI as a tool to understand human cognition
•Strong AI: the claim that a properly programmed computer has a "mind" capable of understanding

Turing Test
•Can artificial intelligence be as good as human intelligence?
How can we test this?
•The Turing test (1950) was designed to test whether humans can distinguish between humans and computers on the basis of conversation
 –A human interrogator could ask a respondent (either a computer or a human, whose identity was hidden) any question he or she wished; based on the response, the interrogator had to decide whether the answer was given by the computer or by the human
•Alan Turing (1912–1954)

A classic conversational agent
•The Turing Test inspired an early, satirical attempt to create a computerized Rogerian therapist, "Eliza"
•Eliza's program is a set of "If…Then" rules:
 –IF the person mentions "father" THEN say "Who else in your family comes to mind when you think about this?"
 –IF the person writes "I remember X" THEN say "Does it make you feel happy to recall X?"
•ELIZA has no intelligence itself; the intelligence comes from people interpreting its statements

Example of modern conversational agents
[The following is a fragment in which A = your instructor and B = a very good artificial agent]
A: are you human?
B: What do you think? I am a Supreme Ruler.
A: seriously
B: I am always serious.
A: who was Turing?
B: A brilliant and tragic figure in the history of computer science.
A: Do you think the Turing test is a good one?
B: Some of them can.
A: are you having a good day?
B: I am having fun.
A: How do you know you are having fun?
B: By introspection.
(Source: http://testing.turinghub.com/)

Future of AI
•Computer chip capacity and processing speed are increasing exponentially
•Some theorists (e.g., Ray Kurzweil) believe this will lead to a technological singularity, along with dramatic improvements in AI

Computational Modeling
•Most modeling in cognitive science targets natural intelligence
•The goal is to model or mimic some aspects of human cognitive functioning
 –e.g., to produce the same errors as humans
•Simulations of aspects of human behaviour

Why do we need computational models?
•They make vague verbal terms specific
 –providing the precision needed to specify complex theories
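Rule-based systems like ELIZA, described earlier, are one of the simplest illustrations of this kind of precision: the "If…Then" rules can be written down exactly and executed. A minimal sketch in Python, using only the two rules quoted in the lecture (the rule table and the default reply are illustrative, not Weizenbaum's actual program):

```python
import re

# A tiny ELIZA-style rule table: each rule pairs a trigger pattern with a
# canned response template. Both rules come from the lecture's examples.
RULES = [
    (re.compile(r"\bfather\b", re.IGNORECASE),
     "Who else in your family comes to mind when you think about this?"),
    (re.compile(r"\bI remember (.+)", re.IGNORECASE),
     "Does it make you feel happy to recall {0}?"),
]

def eliza_reply(sentence):
    """Return the canned response of the first matching rule, or a default."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            # Captured fragments (like the X in "I remember X") are pasted
            # back into the response template.
            return template.format(*match.groups())
    return "Please tell me more."

print(eliza_reply("I remember my old house"))
# The rules carry no understanding of their own; any appearance of
# intelligence comes from the human reading the replies.
```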
•They provide explanations
•They yield quantitative predictions
 –Just as meteorologists use computer models to predict tomorrow's weather, the goal of modeling human behavior is to predict performance in novel settings

Neural Networks
•An alternative to traditional information-processing models
•Also known as:
 –PDP (the parallel distributed processing approach)
 –Connectionist models
•Key figures: David Rumelhart and Jay McClelland
•Neural networks are networks of simple processors that operate simultaneously
•They have some biological plausibility

Idealized neurons (units)
•Inputs → Processor → Output: an abstract, simplified description of a neuron
•Terminology:
 –Activation = the activity of a unit
 –Weight = the strength of the connection between two units
 –Learning = changing the strength of the connections between units
 –Excitatory and inhibitory connections correspond to positive and negative weights, respectively

An example calculation for a single (artificial) neuron
•[Figure: diagram showing how the inputs from a number of units are combined to determine the overall (net) input to unit i]
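The computation described here, where the unit sums its weighted inputs and compares the sum to its threshold of 1, can be sketched as follows. The specific input and weight values are illustrative, not the figure's:

```python
def unit_output(inputs, weights, threshold=1.0):
    """Binary threshold unit: respond +1 if the net input (the weighted
    sum of the inputs) exceeds the threshold, otherwise respond -1."""
    net_input = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net_input > threshold else -1

# Illustrative values (not the slide's figure): four input units J1..J4.
inputs  = [1, -1, 1, 1]
weights = [0.5, 0.5, 1.0, 0.25]  # net input = 0.5 - 0.5 + 1.0 + 0.25 = 1.25
print(unit_output(inputs, weights))  # 1.25 > 1, so the unit outputs +1

# With these weights, flipping J3 to -1 drops the net input to -0.75, so the
# output flips to -1; flipping J4 instead gives 0.75, also below threshold.
```

Note how the effect of flipping one input depends entirely on the size of its weight relative to the threshold, which is the point of the clicker questions that follow.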
•Unit i has a threshold of 1: if its net input exceeds 1, it responds with +1; if the net input is less than 1, it responds with −1 (the final output)
•Clicker questions:
 –What would happen if we changed the input J3 from +1 to −1? (a) the output changes to −1; (b) the output stays at +1; (c) do not know
 –What would happen if we changed the input J4 from +1 to −1? (a) the output changes to −1; (b) the output stays at +1; (c) do not know
 –If we want a positive correlation between the output and input J3, how should we change the weight for J3? (a) make it negative; (b) make it positive; (c) do not know

Multi-layered Networks
•Activation flows from a layer of input units through a set of hidden units to output units
•Weights determine how input patterns are mapped to output patterns
•The network can learn to associate output patterns with input patterns by adjusting its weights
•Hidden units tend to develop internal representations of the input–output associations
•Backpropagation is a common weight-adjustment algorithm

A classic neural network: NETtalk (after Hinton, 1989)
•Input: 7 groups of 29 input units, encoding a 7-letter window of text (e.g., "_ a _ c a t _", centered on the target letter)
•80 hidden units
•Output: 26 output units; a teacher supplies the target output (e.g., /k/)
•The network learns to pronounce English words, i.e., it learns spelling-to-sound relationships (an audio demo accompanied the slide)

Different ways to represent information with neural networks: localist representation
•Each unit represents just one item ("grandmother" cells)
•Example (activations of units 1–6; 0 = off, 1 = on):
 –concept 1: 1 0 0 0 0 0
 –concept 2: 0 0 0 1 0 0
 –concept 3: 0 1 0 0 0 0

Distributed Representations (aka Coarse Coding)

