
Psych 2300 Review

Chapter 9: Experiments

How do you know if a study is an experiment? It manipulates at least one variable and measures another.
- Independent variable: the variable that is manipulated.
- Dependent variable: the variable that is measured.
- Control variables: any variables an experimenter holds constant on purpose.

Criteria for causal statements:
- Covariance: the proposed causal variable must vary systematically with changes in the proposed outcome variable.
- Temporal precedence: the proposed causal variable comes first in time, before the proposed outcome variable.
- Internal validity: the ability to rule out alternative explanations for a causal relationship between two variables.

Threats to internal validity:
- Design confounds: a second variable that happens to vary systematically along with the intended independent variable and is therefore an alternative explanation for the results.
- Selection effects: the kinds of participants at one level of the independent variable are systematically different from the kinds of participants at the other level. (Countered with random assignment or a matched-groups design.)
- Order effects: being exposed to one condition changes how people react to a later condition.

Systematic variability: the levels of a variable coincide in some predictable way with experimental group membership, creating a potential confound (tied to one level).
Unsystematic variability: the levels of a variable occur independently of experimental group membership, contributing to variability within groups (random; spread across all levels).

Simple experiments: independent-groups designs
Posttest-only design: participants are randomly assigned to independent-variable groups and are tested on the dependent variable once.
- Advantages: participants won't have an idea of the researcher's hypothesis.
- Disadvantages: no pretest scores for comparison.

Pretest/posttest design: participants are randomly assigned to at least two groups and are tested on the key dependent variable twice: once before and once after exposure to the independent variable.
- Advantages: can evaluate whether random assignment made the groups equal; can track groups over time.
- Disadvantages: the pretest can make participants suspicious of the hypothesis.

Simple experiments: within-groups designs
Concurrent-measures design: participants are exposed to all the levels of an independent variable at roughly the same time, and a single attitudinal or behavioral preference is the dependent measure.
Repeated-measures design: participants are measured on a dependent variable more than once, after exposure to each level of the independent variable. Has the potential for order effects, which threaten internal validity.

Within-groups vs. independent-groups designs
Within-groups advantages:
- Ensures participants in the treatment conditions are equivalent (they are the same people).
- Gives researchers more power (the ability of a sample to show a statistically significant result when something is truly going on in the population) to notice differences between conditions.
- Generally requires fewer participants overall.
Within-groups disadvantages:
- People see all levels of the independent variable and may change how they would normally act.
- Demand characteristics: cues in an experiment that lead participants to guess its hypothesis.
- Might be impossible for some independent variables.
Independent-groups advantages: avoids order effects and demand characteristics.
Independent-groups disadvantages: requires more participants.

Threat to internal validity in within-groups designs: order effects (carryover effects, practice effects): participants' performance at later levels of the independent variable might be caused not by the experimental manipulation but rather by the sequence in which the conditions were experienced.
- Counterbalancing: present the levels of the independent variable to participants in different orders.
- Partial counterbalancing:
only some of the possible condition orders are represented (e.g., the order of presentation is randomized for each participant).
- Latin square: a formal system of partial counterbalancing that ensures each condition appears in each position at least once.

Is the pretest/posttest design a within-groups design? No: participants are not exposed to all levels of a meaningful independent variable.

Four validities of causal claims:

Construct validity: How well were the variables measured? How well were they manipulated?
- Manipulation check: an extra dependent variable that researchers insert into an experiment to collect empirical data on the construct validity of the independent variable, quantifying how well the manipulation worked.
- Pilot study: a simple study, using a separate group of participants, that is completed before the study of primary interest is conducted.
- Ask what evidence exists that the manipulations and measures actually represent the intended constructs in the theory.

External validity: To whom or to what can you generalize the causal claim?
- How were the participants recruited: random sampling? (Random sampling supports generalizing to people and situations; it is distinct from random assignment.)
- Not as important as internal validity for causal statements.

Statistical validity: How well do your data support your causal conclusion?
- Statistically significant: unlikely to have been obtained by chance from a population in which nothing is happening; suggests that covariance exists between the variables.
- Effect size (d): the strength of the covariance; how far apart two experimental groups are on the dependent variable, and how much scores within groups overlap. The larger the effect size, the more important, and probably the stronger, the causal effect.

  d      Can be described as    Comparable to an r of
  0.20   weak/small             0.10
  0.50   moderate/medium        0.30
  0.80   strong/large           0.50

Internal validity (PRIORITY): Are there alternative explanations for the outcome?
1. Did the design of the experiment ensure that there were no design confounds, or did some other variable accidentally covary along with the intended independent variable?
2. If the experimenters used an independent-groups design, did they control for selection effects by using random assignment or matching?
3. If the experimenters used a within-groups design, did they control for order effects by counterbalancing?

Chapter 10

Internal validity is the most important validity to focus on for an experiment: check for design confounds, selection effects, and order effects.

One-group pretest/posttest design (the "really bad experiment"): a study in which a researcher measures one group of participants on a pretest, exposes them to a treatment, and measures them again on a posttest, with no comparison group.
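The Latin square idea in the counterbalancing notes above can be sketched in code. A minimal illustration (my own sketch, not from the notes): for n conditions, give row i the presentation order (i + j) mod n, so each condition appears in each position exactly once.

```python
def latin_square(conditions):
    """Build presentation orders so each condition fills each position once."""
    n = len(conditions)
    # Row i presents condition (i + j) mod n in position j.
    return [[conditions[(i + j) % n] for j in range(n)] for i in range(n)]

# Four conditions A-D yield four orders; assign each participant one row.
for order in latin_square(["A", "B", "C", "D"]):
    print(order)
```

Note that this simple rotation only guarantees each condition once per position; a balanced Latin square, where each condition also precedes and follows every other condition equally often, requires a different construction.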
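The d benchmarks in the effect-size notes above refer to Cohen's d. As a rough sketch (standard formula for two independent groups, with hypothetical data; not part of the notes), d is the difference between group means divided by the pooled standard deviation:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d for two independent groups: mean difference / pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    # Sample variances, with n - 1 in the denominator.
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical dependent-variable scores for a treatment and a control group:
d = cohens_d([5, 6, 7, 8], [3, 4, 5, 6])
```

A larger d means the group means are farther apart relative to the spread of scores within groups, which is exactly the verbal definition given in the notes.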


OSU PSYCH 2300 - Psych 2300 Review
