Front Back
the independent variable
the predictor; the manipulated variable; the antecedent condition
null hypothesis
what the researcher strives to disprove; states that there is no effect or relationship (i.e., that the alternative hypothesis is false). ex) _____ has no effect on ______.
alternative hypothesis
what you predict to occur
a set of interrelated concepts that presents a systematic view of a phenomenon
Theory
steps of the scientific method
1. Define research Q 2. Form hypothesis 3. test hypothesis by gathering data 4. analyze data 5. interpret data/draw conclusions 6. Publish results 7. Retest/replicate
meso-research
the study of the interaction between individual and collective behavior
macro-research
the study of collective behavior; "collective" meaning a certain amount of agreement among the people
what is the model used to train I/O psychologists?
scientist-practitioner model
top-down measurement development
developing items based on theory
bottom-up measurement development
developing items based on data (from focus groups, pilot testing [try the measure with a sample and see if it works], or going to a source and asking questions)
reliability
the stability and consistency of a measurement
Reliability Rule of Thumb?
rxx ≥ .70
observational design
the researcher observes employee behavior and systematically records what is observed; nonexperimental
survey design
research strategy in which participants are asked to complete a questionnaire or survey; nonexperimental
experimental design
participants are randomly assigned to different conditions. lab - provides excellent control and is likely to support causal conclusions; field - difficult to examine cause/effect relationships
non-experimental design
doesn't include any "treatment" or assignment to different conditions; includes survey and observational designs
quasi-experimental design
participants are assigned to different conditions, but random assignment is not possible
the dependent variable
the subsequent behavior of the research participant; "the effect"
quantitative methods
tests, rating scales, questionnaires, and physiological measures that yield numerical results - generally preferred over qualitative methods
qualitative methods
produce flow diagrams and narrative descriptions of events/processes rather than "numbers" as measures - observations, interviews, case studies, and analysis of diaries/written documents - better for understanding/identifying the context of the behavior in question, while quantitative methods yield numerical results
experimental control
characteristic of research in which possible confounding influences that might make results less reliable or harder to interpret are eliminated; easier to achieve in lab than in field studies
statistical control
using statistical techniques to control for the influence of certain variables. such control allows researchers to concentrate exclusively on the primary relationships of interest
descriptive stats
stats that summarize, organize, and describe a sample of data: measures of central tendency, skew, and variability
variability
the extent to which scores in a distribution vary
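
A quick illustrative sketch of the descriptive statistics and variability cards above, computed on made-up scores with numpy and scipy (the data are hypothetical, not from the course):

import numpy as np
from scipy.stats import skew

scores = np.array([72, 85, 90, 66, 78, 95, 70, 88, 81, 77])  # hypothetical test scores

mean = np.mean(scores)             # central tendency
median = np.median(scores)         # central tendency, robust to outliers
variance = np.var(scores, ddof=1)  # variability (sample variance)
sd = np.std(scores, ddof=1)        # variability (sample standard deviation)
skewness = skew(scores)            # asymmetry of the distribution

print(mean, median, variance, sd, skewness)
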
inferential stats
stats used to aid the researcher in testing hypotheses and making inferences from sample data to a larger sample or population - t test, F test, or chi-square test
statistical significance
p < .05; addresses the confidence we can have that a result is not due to chance. the lower the p-value, the lower the probability the result occurred by chance, and the more confident we are
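
A minimal sketch (with invented data) of how a p-value is obtained and compared to the .05 cutoff, using scipy's independent-samples t-test:

import numpy as np
from scipy.stats import ttest_ind

# hypothetical job-performance ratings for two training conditions
group_a = np.array([7.1, 6.8, 7.4, 7.9, 6.5, 7.2, 7.6, 6.9])
group_b = np.array([6.2, 6.0, 6.6, 5.8, 6.4, 6.1, 6.7, 6.3])

t_stat, p_value = ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("statistically significant" if p_value < .05 else "not significant")
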
statistical power
the likelihood of finding a statistically significant difference when a true difference exists; the smaller the sample size, the lower the power to detect a true difference between groups or the effect of an independent variable on a dependent variable when one really exists
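
A simulation-style sketch (the effect size and sample sizes are made up) of why smaller samples give lower power: the proportion of simulated studies that detect a real difference drops as n shrinks.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
true_effect = 0.5  # assumed true difference between groups, in SD units

for n in (20, 50, 200):  # sample size per group
    hits = 0
    for _ in range(2000):  # simulated studies
        a = rng.normal(0, 1, n)
        b = rng.normal(true_effect, 1, n)
        if ttest_ind(a, b).pvalue < .05:
            hits += 1
    print(f"n = {n}: estimated power = {hits / 2000:.2f}")
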
measurement
assigning a number to objects or characteristics of individuals
multiple correlation coefficient
stat that represents the overall linear association between SEVERAL variables (cognitive ability, personality, experience) on one hand and a SINGLE variable (job performance) on the other
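
An illustrative sketch (made-up predictor and criterion data) of computing a multiple correlation coefficient R: regress the single criterion on the several predictors, then correlate the predicted values with the observed criterion.

import numpy as np

rng = np.random.default_rng(1)
n = 100
cog_ability = rng.normal(size=n)
conscientiousness = rng.normal(size=n)
experience = rng.normal(size=n)
# hypothetical criterion influenced by all three predictors plus noise
job_performance = 0.5*cog_ability + 0.3*conscientiousness + 0.2*experience + rng.normal(size=n)

X = np.column_stack([np.ones(n), cog_ability, conscientiousness, experience])
betas, *_ = np.linalg.lstsq(X, job_performance, rcond=None)
predicted = X @ betas

R = np.corrcoef(predicted, job_performance)[0, 1]  # multiple correlation coefficient
print(f"R = {R:.2f}")
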
meta-analysis
stat method for combining and analyzing results from many studies to draw a general conclusion about relationships among variables
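
A bare-bones sketch (hypothetical study results) of the core meta-analytic step: combining correlations from many studies into a sample-size-weighted average.

import numpy as np

# hypothetical (r, n) pairs from five studies of the same relationship
studies = [(0.25, 120), (0.31, 80), (0.18, 200), (0.40, 50), (0.22, 150)]

rs = np.array([r for r, n in studies])
ns = np.array([n for r, n in studies])

weighted_mean_r = np.sum(ns * rs) / np.sum(ns)  # weight each r by its sample size
print(f"weighted mean r = {weighted_mean_r:.3f}")

Real meta-analyses go further and correct for the statistical artifacts listed in the next card.
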
statistical artifacts
characteristics of a study that may distort results. they are: - sample size - range restriction - reliability
validity
addresses whether a measure accurately and completely represents what was intended to be measured; the accuracy of inferences made based on test or performance data
a type of reliability calculated by correlating measurements taken at time 1 with measurements taken at time 2
test-retest reliability; measures consistency over time, AKA temporal consistency
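
A small sketch (invented scores) of estimating test-retest reliability as the correlation between time-1 and time-2 measurements, checked against the .70 rule of thumb:

import numpy as np

time1 = np.array([10, 14, 8, 12, 15, 9, 13, 11])   # scores at time 1
time2 = np.array([11, 13, 9, 12, 14, 10, 12, 11])  # same people, measured again at time 2

r_xx = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest reliability = {r_xx:.2f}")
print("acceptable" if r_xx >= .70 else "below the .70 rule of thumb")
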
type of reliability calculated by correlating measurements from a sample of people who complete 2 different forms of the same test
Equivalent Forms Reliability ex) SAT
form of reliability that assesses how consistent the items of a test measure a single construct (stress)
Internal consistency reliability; ex) split the test by even/odd items and correlate the scores; typically estimated using Cronbach's alpha
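
A compact sketch (toy item responses) of Cronbach's alpha computed from its usual formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores):

import numpy as np

# rows = respondents, columns = items on a hypothetical stress scale
items = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
    [4, 4, 5, 4],
])

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
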
the extent to which scores/ratings of something/someone across multiple raters are stable
inter-rater reliability
correlating a test score w a performance measure, attitude, or behavior
criterion-related validity; important for selecting employees
the extent to which a measure represents all facets of a social construct
content validity
a concept or characteristic that a test is intended to measure
construct
the extent that a test measures the intended construct
construct validity ---> convergent validity: correlates with other tests that measure the same construct ---> divergent validity: doesn't correlate highly with tests that measure something different
face validity
Face validity is the extent to which a test is subjectively viewed as covering the concept it purports to measure.
"g"
general mental ability - the capacity to reason, learn, and solve problems in a variety of ways ** one of the best predictors of broad success in education and work
RIASEC Model
Realistic interests (e.g., firefighter, police officer, farmer), Investigative, Artistic, Social, Enterprising, Conventional
biodata
factual kinds of questions about one's self, life, and experiences. Good items: historical, objective, verifiable, equal access, job relevant. Bad items: hypothetical, subjective, non-verifiable, non-relevant
situational judgment tests
presents a situation and asks what you would do - good evidence for validity and acceptance by job candidates
asks Qs directly about theft and other past honesty behaviors
overt integrity test
test that measures counter-productive behaviors like general delinquency, impulse control, and conscientiousness
personality - oriented integrity test
asks applicants to demonstrate work behavior under realistic conditions
work samples/simulations
actions/behaviors relevant to an organization's goals
performance
the value in terms of increased validity of adding a particular predictor to an existing selection system
incremental validity
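
An illustrative sketch (fabricated data; the grit/hardiness/GPA names are only for illustration) of checking incremental validity: compare R-squared for the existing predictor(s) with and without the new predictor added.

import numpy as np

def r_squared(X, y):
    """R-squared from an ordinary least-squares fit (intercept added here)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    betas, *_ = np.linalg.lstsq(X1, y, rcond=None)
    pred = X1 @ betas
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

rng = np.random.default_rng(2)
n = 150
grit = rng.normal(size=n)
hardiness = 0.4*grit + rng.normal(size=n)        # overlaps with grit but adds unique variance
gpa = 0.5*grit + 0.3*hardiness + rng.normal(size=n)

r2_old = r_squared(grit.reshape(-1, 1), gpa)                     # grit alone
r2_new = r_squared(np.column_stack([grit, hardiness]), gpa)      # grit + hardiness
print(f"incremental validity (delta R^2) = {r2_new - r2_old:.3f}")
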
when the actual criterion is missing info that is part of the behavior one is trying to measure
criterion deficiency; theoretical criterion
when actual criterion includes info unrelated to the behavior one is trying to measure
criterion contamination; actual criterion
most common CWB behaviors
absenteeism, sabotage, and dishonesty
research shows "g" better predicts ___ whereas personality better predicts _____
g = task performance personality = contextual performance
proficiency at performing activities that are formally part of the job
task performance
proficiency at performing activities not formally part of the job, but that support other aspects of the work environment
contextual performance
in stating that hardiness goes above and beyond grit in the prediction of GPA, we're asserting hardiness has:
incremental validity
process of defining jobs in terms of component tasks and the knowledge and skills required to perform them
job analysis
task oriented job analysis
states actual tasks and what is accomplished by them; pro: easier to distinguish among jobs and equipment
worker oriented job analysis
states attributes of the worker needed to accomplish tasks; pro: useful for thinking across organizations or types of jobs
what are the KSAOs
Knowledge Skill Ability Other characteristics
traditional ways of job analysis
observe, interview, collect "critical incidents" and work diaries, & give questionnaires and surveys
newer ways of job analysis
electronic performance monitoring and email monitoring. research cons: might improve performance on monitored aspects but decrease performance in other areas; can reduce job satisfaction and increase stress levels
combining a set of variables to see how they relate to a single variable yields a
multiple correlation coefficient
if a student's test score is positively related to whether they pass or fail the course, the test has
criterion-related validity
rule of thumb for an acceptable reliability coefficient?
rxx ≥ .70
evaluation of the results of performance
effectiveness
behaviors relevant to the organization's goals, measured in terms of each individual's proficiency
performance
the ratio of effectiveness (output) to the cost of achieving that level of effectiveness (input)
productivity
3 determinants of job performance
- declarative knowledge (DK): understanding what's required to perform a task - procedural knowledge and skill (PKS): knowing how to perform the task - motivation (M): conditions responsible for variations in intensity, persistence, quality, and direction of ongoing behavior
typical vs. maximum performance
typical = 70% effort for 8 hours; maximum = 100% effort for 4-8 hours
altruism
helping an individual or group in the organization
task performance
proficiency in formally recognized job tasks
how are OCBs and CWBs interrelated?
a weak, negative correlation
contextual performance
supports core of job; not required, but goes above and beyond
adaptive performance
not listed on job description but required when necessary
who makes performance meas ratings?
usually the supervisor, but 360-degree feedback is increasingly common
360 feedback rating practices
- ensure anonymity - supervisor and ratee get together to decide on raters - useful for development and growth, not for administrative decisions - train raters - allow follow-ups
agreement across sources in 360 feedback ratings is generally low.
TRUE
rater errors
1. observe --> miss important behaviors due to motivated observation 2. encode --> incorrectly label info due to insufficient attention 3. store --> store wrong info 4. retrieve --> implicit theories (affect/context-dependent recall) 5. integrate info --> liking effects, bias
rating errors
central tendency error, halo error, leniency-severity error
central tendency error
tendency to rate employees towards the middle of the scale (playing it safe)
leniency-severity error
tendency to rate too leniently or too harshly
halo error
tendency to rate an employee in a consistent way on all dimensions based on one perception
psychometric training
informing the rater of common rating distortions *** reduces error but doesn't improve accuracy
frame-of-reference training
provides info about the multidimensional nature of performance, makes sure the rater understands the scale, has raters practice rating standard performances, and gives raters feedback on their practice ratings **very useful
when giving feedback focus on
behavior not personal characteristics
