UCI P 140C - COGNITIVE NEUROSCIENCE


COGNITIVE NEUROSCIENCE

Note
•Please read the book to review major brain structures and their functions
•Please read the book to review brain imaging techniques
•See also additional slides available on the class website

Cognitive Neuroscience
•The study of the relation between cognitive processes and brain activities
•Potential to measure some "hidden" processes that are part of cognitive theories (e.g., memory activation, attention, "insight")
•Measures when and where activity is happening. Different techniques have different strengths: there is a tradeoff between spatial and temporal resolution

Techniques for Studying Brain Functioning
•Single unit recordings – Hubel and Wiesel (1962, 1979)
•Event-related potentials (ERPs)
•Positron emission tomography (PET)
•Magnetic resonance imaging (MRI and fMRI)
•Magneto-encephalography (MEG)
•Transcranial magnetic stimulation (TMS)
[Figure: the spatial and temporal ranges of some techniques used to study brain functioning.]

Single Cell Recording (usually in animal studies)
Measure neural activity with probes, e.g., the research by Hubel and Wiesel.

Hubel and Wiesel (1962)
•Studied the LGN and primary visual cortex in the cat.
•Found cells with different receptive fields – different ways of responding to light in certain areas:
  –LGN On cell
  –LGN Off cell
  –Directional cell
[Figure: action potential frequency of a cell associated with a specific receptive field in a monkey's field of vision. The frequency increases as a light stimulus is brought closer to the receptive field.]

COMPUTATIONAL COGNITIVE SCIENCE

Computer Models
•Artificial intelligence – constructing computer systems that produce intelligent outcomes
•Computational modeling – programming computers to model or mimic some aspects of human cognitive functioning; modeling natural intelligence; simulations of behavior

Why do we need computational models?
•They provide the precision needed to specify complex theories, making vague verbal terms specific
•They provide explanations
•They yield quantitative predictions – just as meteorologists use computer models to predict tomorrow's weather, the goal of modeling human behavior is to predict performance in novel settings

Neural Networks
•An alternative to traditional information-processing models – also known as PDP (parallel distributed processing) or connectionist models
•Neural networks are networks of simple processors that operate simultaneously
•Some biological plausibility

Idealized neurons (units)
[Figure: an abstract, simplified description of a neuron – inputs feed into a processor, which produces an output.]

Different ways to represent information with neural networks: localist representation
Each unit represents just one item ("grandmother" cells). Activations of units (0 = off, 1 = on):

            Unit 1  Unit 2  Unit 3  Unit 4  Unit 5  Unit 6
concept 1      1       0       0       0       0       0
concept 2      0       0       0       1       0       0
concept 3      0       1       0       0       0       0

Coarse Coding / Distributed Representations
Each unit is involved in the representation of multiple items. Activations of units (0 = off, 1 = on):

            Unit 1  Unit 2  Unit 3  Unit 4  Unit 5  Unit 6
concept 1      1       1       1       0       0       0
concept 2      1       0       1       1       0       1
concept 3      0       1       0       1       0       1

Advantage of Distributed Representations
•Efficiency – solves the combinatorial explosion problem: with n binary units, 2^n different representations are possible (e.g., how many English words can be formed from combinations of the 26 letters of the alphabet?)
•Damage resistance – even if some units do not work, information is still preserved; because information is distributed across the network, performance degrades gradually as a function of damage (aka robustness, fault tolerance, graceful degradation)

Suppose we lost unit 6
Can the three concepts still be discriminated?

            Unit 1  Unit 2  Unit 3  Unit 4  Unit 5  Unit 6 (lost)
concept 1      1       1       1       0       0       0
concept 2      1       0       1       1       0       1
concept 3      0       1       0       1       0       1

An example calculation for a single neuron
[Figure: how the inputs from a number of units are combined to determine the overall input to unit i. Unit i has a threshold of 1, so if its net input exceeds 1 it responds with 1, but if the net input is less than 1 it responds with –1.]

Neural-Network Models
The simplest models include three layers of units:
(1) The input layer is a set of units that receives stimulation from the external environment.
(2) The units in the input layer are connected to units in a hidden layer, so named because these units have no direct contact with the environment.
(3) The units in the hidden layer in turn are connected to those in the output layer.

Multi-layered Networks
•Activation flows from a layer of input units through a set of hidden units to output units
•Weights determine how input patterns are mapped to output patterns
•The network can learn to associate output patterns with input patterns by adjusting its weights
•Hidden units tend to develop internal representations of the input-output associations
•Backpropagation is a common weight-adjustment algorithm
[Figure: input units → hidden units → output units]

Example of Learning Networks
•http://www.cs.ubc.ca/labs/lci/CIspace/Version3/neural/index.html

Another example: NETtalk
[Figure, after Hinton (1989): 7 groups of 29 input units encode 7 letters of text input (e.g., "_ a _ c a t _"); 80 hidden units; 26 output units produce the pronunciation of the target letter (e.g., /k/), with a teacher supplying the target output.]
A connectionist network learns to pronounce English words, i.e., it learns spelling-to-sound relationships. [Audio demo linked in the original slides.]

Other demos
Hopfield network:
•http://www.cbu.edu/~pong/ai/hopfield/hopfieldapplet.html
Backpropagation algorithm and competitive learning:
•http://www.cs.ubc.ca/labs/lci/CIspace/Version4/neural/
•http://www.psychology.mcmaster.ca/4i03/demos/demos.html
Competitive learning:
•http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/DemoGNG/GNG.html
Various networks:
•http://diwww.epfl.ch/mantra/tutorial/english/
Optical character
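The efficiency and damage-resistance points above, and the "Suppose we lost unit 6" exercise, can be checked directly. A minimal sketch in Python, using the activation patterns from the distributed-representation table:

```python
# Distributed representations from the notes (0 = off, 1 = on), units 1-6.
concepts = {
    "concept 1": (1, 1, 1, 0, 0, 0),
    "concept 2": (1, 0, 1, 1, 0, 1),
    "concept 3": (0, 1, 0, 1, 0, 1),
}

# Efficiency: n binary units allow 2^n distinct patterns.
n_units = 6
print(2 ** n_units)  # → 64

# Damage resistance: delete unit 6 and ask whether the three concepts
# can still be told apart (graceful degradation).
damaged = {name: pattern[:5] for name, pattern in concepts.items()}
still_discriminable = len(set(damaged.values())) == len(damaged)
print(still_discriminable)  # → True: the truncated patterns are all distinct
```

So the answer to the slide's question is yes: even with unit 6 gone, the first five units alone still discriminate the three concepts.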
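The single-neuron calculation described above (threshold of 1, responding with 1 or –1) can be sketched as follows. The particular input values and connection weights are made-up numbers for illustration; they are not from the original slide:

```python
def unit_output(inputs, weights, threshold=1.0):
    """Threshold unit from the notes: respond with 1 if the net input
    exceeds the threshold, otherwise respond with -1."""
    net_input = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net_input > threshold else -1

# Hypothetical example: three sending units and their connection weights.
inputs = [1, -1, 1]
weights = [0.8, 0.3, 0.6]
# net input = 0.8*1 + 0.3*(-1) + 0.6*1 = 1.1, which exceeds 1
print(unit_output(inputs, weights))  # → 1
```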
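The multi-layered network described above (input → hidden → output, weights adjusted by backpropagation) can be sketched in a few dozen lines. The XOR task, network size, learning rate, and epoch count below are illustrative choices, not from the slides; XOR is a standard demo because it cannot be solved without hidden units:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR training set: input pattern -> target output.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

n_in, n_hid = 2, 4
# One weight per input plus a trailing bias weight for each unit.
w_hid = [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
w_out = [random.uniform(-1, 1) for _ in range(n_hid + 1)]

def forward(x):
    """Activation flows from input units through hidden units to the output unit."""
    h = [sigmoid(sum(w[i] * xi for i, xi in enumerate(x)) + w[-1]) for w in w_hid]
    o = sigmoid(sum(w_out[j] * hj for j, hj in enumerate(h)) + w_out[-1])
    return h, o

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

initial_error = mse()
lr = 0.5
for _ in range(20000):
    for x, t in data:
        h, o = forward(x)
        # Backpropagation: error signal at the output, then at each hidden unit.
        d_out = (o - t) * o * (1 - o)
        d_hid = [d_out * w_out[j] * h[j] * (1 - h[j]) for j in range(n_hid)]
        # Adjust weights in proportion to each unit's error and its input.
        for j in range(n_hid):
            w_out[j] -= lr * d_out * h[j]
        w_out[-1] -= lr * d_out
        for j in range(n_hid):
            for i in range(n_in):
                w_hid[j][i] -= lr * d_hid[j] * x[i]
            w_hid[j][-1] -= lr * d_hid[j]

final_error = mse()
print(initial_error, "->", final_error)
```

After training, the error on the four patterns should have dropped well below its starting value: the hidden units have developed an internal representation of the input-output association, exactly as the bullet points above describe.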

