PSYCH 240: QUIZ #6
46 Cards in this Set
Front | Back |
---|---|
Episodic Memory
|
-Memory of personal episodes (from a 1st person perspective)
-Tied to a specific time and place
-Personal perspective
-i.e. First kiss, walking to class this morning
|
Semantic Memory
|
-General knowledge, memory of facts
-Not tied to specific time or place
-i.e. Pittsburgh is in Pennsylvania, 2+2 = 4
|
Categorization
|
-Allows inferences about members of a class (that we might not otherwise be able to make)
-i.e. We know that something is a butterfly (but have never encountered this specific butterfly before) -> there are a lot of things we can infer (i.e. came from a cocoon, living thing, insect which…
|
Categorization by Pigeons (Wasserman et al., 1987)
|
-Trained pigeons to look at a particular picture; 4 holes surrounding the picture; rewarded for pecking in the right hole (i.e. pic of rose -> peck flower hole)
-Pigeons peck one of four keys depending on stimulus
-Results: 81% accurate with old exemplars (could explain by stimulus-resp…
|
Categorization by Conceptual Knowledge (Gelman and Markman, 1983)
|
-Categories are not always similarity based
-i.e. Hawk is a bird but visually similar to a bat
-Child told that flamingo is a bird and feeds its young mashed up food v. bat which feeds its young milk -> what does the black bird feed its young?
-Children age 4 say bird feeds mashed food…
|
Classical View of Categories
|
-Defining properties: necessary and sufficient to determine whether something is a member of a category
-i.e. Bachelor = unmarried adult male
-Many scientific classification systems are based on defining properties
-But our brains may not always categorize things by such rules
|
Problems with Classical View of Categories
|
-Some things do not have necessary and sufficient conditions: i.e. Game -> What makes something a game? Have to be able to capture things as diverse as basketball and chess
-Problems with exceptions: i.e. Is a monk a bachelor? (adult, unmarried male); Is a game-show 'reality TV'?
|
Modern Probabilistic View of Categories
|
-Psychologically, properties/features are characteristic, not defining (not guaranteed to be present even though characteristic)
-Something belongs to a category if it is similar to members of that category (categorize not based on rules, but on similarity)
-Some members have more chara…
|
Evidence for Typicality/Fuzzy Categories: Ratings
|
-Exemplars with more characteristic properties are rated as being more typical of a category
-i.e. Order from most to least typical: apple, banana, pineapple, fig, olive
-How can we all agree on that if there are defining properties (i.e. in the category or out)?
-Instead, it seems tha…
|
Evidence for Typicality: Sentence Verification
|
-Given a sentence, respond whether it is true or false
-i.e. Judge each of the following: "A robin is a bird" v. "A chicken is a bird"; "Tennis is a sport" v. "Curling is a sport"
-People are faster to verify more typical exemplars than less typical exemplars
|
Evidence for Typicality: Hedges
|
-Hedge in language = rather than completely committing to a specific idea, you're hedging (i.e. "Yeah, it's kind of true")
-i.e. More likely to hear "A whale is technically a mammal" than "A dog is technically a mammal" since a whale isn't a prototypical member of the category
-Such lin…
|
Exemplar Theories
|
-Exemplar: example of a category
-Store away in memory a bunch of examples
-Multiple exemplars are stored in memory (i.e. for category "dog," there are a whole bunch of exemplar dogs stored away)
-Categorize new things based on similarity to stored exemplars
|
Prototype Theories
|
-Not judging against a whole bunch of individual examples, but against an ideal/average (based on experience with members of a category, you create one singular representation)
-Prototype: best, ideal, or average example
-Only a 'prototype' is stored in memory
-Categorize based on simi…
|
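The two theories above can be contrasted in a short sketch. This is an illustrative assumption-laden toy, not course material: the categories, feature vectors (invented dimensions like size and ferocity), and Euclidean distance measure are all made up.

```python
def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical stored exemplars on two invented dimensions (size, ferocity)
exemplars = {
    "dog":  [(0.4, 0.3), (0.5, 0.2), (0.6, 0.4)],
    "lion": [(0.8, 0.9), (0.9, 0.8)],
}

def categorize_exemplar(item):
    """Exemplar theory: compare the item to every stored example and
    pick the category containing the single closest one."""
    return min(
        (distance(item, e), cat)
        for cat, exs in exemplars.items()
        for e in exs
    )[1]

def categorize_prototype(item):
    """Prototype theory: average each category's examples into one
    prototype, then pick the category with the nearest prototype."""
    best_cat, best_d = None, float("inf")
    for cat, exs in exemplars.items():
        proto = [sum(dim) / len(exs) for dim in zip(*exs)]
        d = distance(item, proto)
        if d < best_d:
            best_cat, best_d = cat, d
    return best_cat

print(categorize_exemplar((0.5, 0.3)))     # near the dog exemplars -> "dog"
print(categorize_prototype((0.85, 0.85)))  # near the lion prototype -> "lion"
```

With tight clusters like these the two theories agree; they come apart when a category's examples are spread out, since an item can be close to one stored exemplar yet far from the category average.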
Geometric Approach to Similarity
|
-Concepts may lie in a geometric space (represent concepts stored in semantic memory in a geometric space)
-Things that are similar are close together, things that are dissimilar are far apart
-All you have to do to judge similarity is compute how far apart concepts are in this space
-…
|
Geometric Similarity: Ratings
|
-One way to determine geometric similarity is to start with subjective ratings
-i.e. On a scale of 1-6, how similar is an apple to a plum?
-These ratings can be used to place the concepts in a geometric space (using "multi-dimensional scaling")
|
Metric Axioms
|
-If concepts are really represented in geometric space, similarities should satisfy certain properties (axioms) of geometric space
-These axioms are not satisfied, which falsified the geometric theory of similarity judgment
|
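A quick way to see why the axioms matter: distances in an actual geometric space satisfy them automatically, so human ratings that break them are evidence against the geometric theory. A minimal Python check over invented 2-D coordinates (of the kind multi-dimensional scaling might produce):

```python
import itertools
import math

# Invented coordinates for three concepts in a 2-D space
space = {"apple": (1.0, 2.0), "plum": (1.5, 2.5), "lemon": (3.0, 1.0)}

def dist(a, b):
    (x1, y1), (x2, y2) = space[a], space[b]
    return math.hypot(x2 - x1, y2 - y1)

for a, b in itertools.product(space, repeat=2):
    assert dist(a, a) <= dist(a, b)              # minimality
    assert math.isclose(dist(a, b), dist(b, a))  # symmetry
for a, b, c in itertools.product(space, repeat=3):
    assert dist(a, c) <= dist(a, b) + dist(b, c) + 1e-9  # triangle inequality
print("Euclidean distances always satisfy the metric axioms")
```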
Metric Axioms: Minimality
|
-The dissimilarity between a concept and itself must always be the smallest possible
-But a highly familiar concept is rated as more similar to itself than one that is less familiar
-i.e. Apple-Apple more similar than Pomegranate-Pomegranate (violates minimality because distances should…
|
Metric Axioms: Symmetry
|
-Distance from A to B has to equal distance from B to A; the similarity between concepts must be the same regardless of direction
-i.e. How similar is an apple to a plum v. a plum to an apple?
-People give asymmetric ratings based on direction
-Experiments have found that: an unfamilia…
|
Metric Axioms: Triangle Inequality
|
-The length of any side of a triangle cannot exceed the sum of the other two sides
-If one concept is similar to a second concept, and the second concept is similar to the third concept, then the first and the third must be relatively similar
-i.e. Jamaica is similar to Cuba. Cuba is similar to China. But Jama…
|
Tversky's Featural Approach to Similarity
|
-People give similarity ratings inconsistent with geometric space
-Feature-based similarity approaches do not require metric axioms
-Feature-based approaches look at features in common and features that differ (more shared features, more similarity)
-i.e. Lemon (yellow, oval, trees etc…
|
Tversky's Contrast Model
|
-Similarity(L,O) = a*f(features shared by L and O) - b*f(features of L but not O) - c*f(features of O but not L)
-a, b, and c are weights; people may weight these terms differently
-Weights don't have to be the same (i.e. don't have to put the same weight on features unique to stimulus as features unique to memory -> helps predict…
|
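The contrast model can be turned into a few lines of Python. This is an illustrative sketch: the feature sets are invented (and far from exhaustive), set size stands in for the salience function f, and the weights a, b, c are arbitrary.

```python
def contrast_similarity(subject, referent, a=1.0, b=0.8, c=0.2):
    """Tversky: sim = a*f(shared) - b*f(subject-only) - c*f(referent-only),
    with set size standing in for the salience function f."""
    return (a * len(subject & referent)
            - b * len(subject - referent)
            - c * len(referent - subject))

# Hypothetical feature sets (invented for illustration)
north_korea = {"asian", "communist", "small"}
china = {"asian", "communist", "large", "populous", "ancient"}

# Because b != c, reversing the comparison changes the rating:
print(contrast_similarity(north_korea, china))  # "North Korea is like China"
print(contrast_similarity(china, north_korea))  # "China is like North Korea"
```

With b > c, the less familiar, feature-poor concept compared to the feature-rich one scores higher than the reverse, matching the asymmetric ratings people give.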
Tversky's Featural Approach: Explanation for Minimality
|
-Things you know well have more features, so the a*f(shared) term will be higher for familiar things than unfamiliar ones
-The final two terms in the equation are zero (an item shares all of its features with itself), so only the first term matters
-i.e. Can think of six or se…
|
Tversky's Featural Approach: Explanation for Symmetry
|
-The (b) and (c) weights can be different, so the order of the comparison makes a difference
-b = unique features of first concept, c= unique features of second concept -> will get different ratings if you reverse the order
|
Tversky's Featural Approach: Explanation for Triangle Inequality
|
-Two concepts can be similar to a third for different reasons, but have nothing in common themselves
-i.e. Jamaica & Cuba similar due to location, China & Cuba similar due to politics; Jamaica & China have nothing in common
-Can explain by assuming you're judging things across multiple …
|
Tversky's Featural Approach: Typicality Explanation
|
-Typicality ratings reflect similarity of concept to category
-i.e. Birds have these characteristic features: flies, sings, lays eggs, etc.
-Apply model -> What features does robin have? flies, sings, etc. -> all features shared with category of bird
-Chicken: doesn't fly, doesn't sing…
|
TLC Model (Collins & Quillian)
|
-Semantic network model
-Each node corresponds to a concept; each concept associated with corresponding features (i.e. birds have feathers)
-Concepts are related to one another -> in original TLC model, they're arranged in a strict hierarchy
-Links are "is a" links and mean that one co…
|
Power of TLC Model
|
-Allows system to make inferences based on connections
-i.e. Canary is a bird, bird has wings -> canary has wings
-Inheritance: canary inherits the properties of its parent (bird)
-Cognitive economy: don't have to represent every feature with every concept; can be economic in your repr…
|
TLC Model: Sentence Verification Task
|
-i.e. A canary is a bird: Look at features (don't see "is a bird" listed as a feature) -> go to "is a" link -> find out that a canary is a bird
-i.e. A salmon is a bird -> a salmon is a fish, which is an animal, bird is another type of animal -> falsify
-i.e. A salmon is blue -> find fe…
|
TLC Model: Distance Effects
|
-More links, more time
-Natural prediction: time it takes to verify should be based on how many links you have to follow
-i.e. "A canary can sing" v. "can fly" v. "has skin" -> sing is right next to category, fly is one level up, skin is 2 -> sing should be fastest, skin slowest
-Same …
|
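The link-counting idea can be sketched with a tiny invented fragment of the hierarchy; verification time is modeled simply as the number of "is a" links climbed before the property is found.

```python
# Invented fragment of a strictly hierarchical TLC network
is_a = {"canary": "bird", "bird": "animal", "salmon": "fish", "fish": "animal"}
properties = {
    "canary": {"sings", "is yellow"},
    "bird":   {"flies", "has wings"},
    "animal": {"has skin", "eats"},
}

def links_to_property(concept, prop):
    """Climb 'is a' links until the property is found (cognitive economy:
    each property is stored only once, at the highest relevant node)."""
    steps, node = 0, concept
    while node is not None:
        if prop in properties.get(node, set()):
            return steps
        node = is_a.get(node)
        steps += 1
    return None  # property not found anywhere up the chain

print(links_to_property("canary", "sings"))     # 0 links: fastest
print(links_to_property("canary", "flies"))     # 1 link
print(links_to_property("canary", "has skin"))  # 2 links: slowest
```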
Problems with TLC: Reverse Distance Effect
|
-People are usually faster to verify "a dog is an animal" than "a dog is a mammal"
-According to original TLC, a mammal is a type of animal and a dog is a type of mammal -> dog to mammal is one link, but dog to animal is two
-Verifying dog is an animal should be slower than dog is a mamm…
|
Problems with TLC: Typicality Effects
|
-i.e. Robin might be more typical than chicken
-Can see this in reaction times
-Faster to verify that a robin is a bird than that a chicken is; faster to verify that tennis is a sport than that curling is
-Hard to explain with TLC model: a robin is a bird and a chicken is a bird, so both have one "is …
|
Problems with TLC: Basic Level Objects
|
-When asked to (1) name features or (2) say what features objects have in common, it's easy with basic-level (i.e. chairs), hard with other levels (i.e. furniture)
-People will name an object by the basic level term (i.e. will say "that is a jacket" not "that is clothing")
-Children lea…
|
Revised TLC
|
-Still has nodes that correspond to categories and links that correspond to associations between nodes
-Not strictly hierarchical anymore
-Not all links are the same: some links are short (strong association), others are long (weaker association)
-Links are also labeled: provides infor…
|
Revised TLC: Spreading Activation
|
-A node is activated when a person sees, reads, hears, thinks about a concept
-Activation spreads to adjacent nodes
-Spread of activation permits sentence verification
-When activation intersects, decide whether relationship makes statement true
-i.e. In sentence verification, activat…
|
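One way to sketch spreading activation computationally is shortest-path search over a weighted network, where link length stands for inverse association strength. The graph, the lengths, and the use of Dijkstra's algorithm here are illustrative assumptions, not the model as taught.

```python
import heapq

# Invented non-hierarchical network; shorter link = stronger association.
# Note "dog" links more strongly (length 1) to "animal" than to "mammal" (3).
links = {
    "dog":    {"animal": 1, "mammal": 3},
    "animal": {"dog": 1, "mammal": 2, "lion": 4},
    "mammal": {"dog": 3, "animal": 2},
    "lion":   {"animal": 4},
}

def activation_time(start, goal):
    """Time for activation to spread from one concept to the other,
    modeled as shortest weighted path (Dijkstra's algorithm)."""
    frontier, seen = [(0, start)], set()
    while frontier:
        t, node = heapq.heappop(frontier)
        if node == goal:
            return t
        if node in seen:
            continue
        seen.add(node)
        for nbr, length in links[node].items():
            heapq.heappush(frontier, (t + length, nbr))
    return None

print(activation_time("dog", "animal"))   # reverse distance effect: fast
print(activation_time("dog", "mammal"))   # slower despite the hierarchy
print(activation_time("lion", "animal"))  # typicality: slower than dog
```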
Revised TLC Explanation for Reverse Distance
|
-i.e. Semantic network with mammal, animal, dog, etc.
-Lengths vary: animal and dog have short link, mammal and dog have longer link
-Faster to say that a dog is an animal than mammal
-Not strictly hierarchical, shorter link between dog and animal than dog and mammal
-Spreading activa…
|
Revised TLC: Explanation for Typicality
|
-i.e. Which is more typical, a dog or a lion?
-Most people would say a dog -> can be explained because the association between dog and animal is shorter than lion and animal
-Would take longer for activation to intersect with lion than dog
|
What does the revised TLC model still not explain?
|
-Basic level effects
-Cannot look at revised TLC model and say "dog" and "cat" are special (basic level), but mammal is not
-A node is a node is a node in the model
|
How does the spreading activation model account for priming?
|
-Lexical decision task
-People are faster to say that doctor and nurse are words than they are to say that butter and nurse are
-Semantic network with strong associations between "doctor" and "nurse," but no association between "butter" and "nurse"
-Activate "doctor" -> start spreading…
|
Schema
|
-Mental framework of knowledge that encompasses a number of interrelated concepts
|
Theory-Based View of Meaning
|
-People understand and categorize concepts in terms of implicit theories, or general ideas they have regarding those concepts
-Suggests people can distinguish between essential and incidental/accidental features because they have complex mental representations of these concepts
-Evidenc…
|
Essentialism
|
-Certain categories, such as those of "lion" or "female," have an underlying reality that cannot be observed directly
-Even young children look beyond obvious features to understand the "essential" nature of something (by age 4 can use abstract categories as opposed to simply perceptual …
|
Models Based on Comparing Semantic Features
|
-Features of different concepts are compared directly rather than serving as the basis for forming a category
-i.e. Mammals can be represented in terms of a psychological space organized by three features: size, ferocity, and humanness
|
Parallel Processing/Connectionist Model
|
-Very large numbers of cognitive operations go on at once through a network distributed across incalculable numbers of locations in the brain
-Network comprises neuron-like structures (nodes do not in and of themselves represent knowledge, the pattern of connections does)
-No single uni…
|
Activation in the PDP Model
|
-Connections between units can possess varying degrees of potential excitation or inhibition
-These differences can occur even when the connections are currently inactive
-The more often a particular connection is activated, the greater is the strength of the connection, whether the con…
|
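The use-strengthens-connections idea can be sketched with a toy Hebbian-style update; the unit pairs, starting weights, and learning rule are invented for illustration.

```python
# Connection strengths between pairs of units (invented starting values)
weights = {("doctor", "nurse"): 0.1, ("butter", "nurse"): 0.1}

def co_activate(a, b, rate=0.2):
    """Each time two units fire together, their connection strengthens,
    saturating toward 1.0 (the same applies to inhibitory links)."""
    w = weights[(a, b)]
    weights[(a, b)] = w + rate * (1.0 - w)

for _ in range(5):
    co_activate("doctor", "nurse")  # frequently experienced together

# The often-used connection is now far stronger than the rarely used one
print(weights[("doctor", "nurse")], weights[("butter", "nurse")])
```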
Knowledge in the PDP Model
|
-When we use knowledge, we change representation of it -> knowledge is a process, not a final product
-What is stored is a pattern of potential excitatory or inhibitory connection strengths that the brain uses to re-create other patterns when stimulated to do so
-When we receive new inf…
|
Modular
|
-Divided into discrete modules that operate more or less independently of each other
-Each independently functioning module can process only one kind of input, such as language (i.e. words) or visual percepts (i.e. faces)
-i.e. Different brain areas active when recognizing faces than wh…
|