PSY 3711: MIDTERM 2
77 Cards in this Set
Front | Back |
---|---|
Are interests a preference for particular work activities or outcomes?
|
Activities
|
Are values a preference for particular work activities or outcomes?
|
Outcomes
|
What are the 6 dimensions of the structure of interests?
|
Investigative
Artistic
Social
Enterprising
Realistic
Conventional
|
What are the 4 underlying dimensions of RIASEC
|
Data
Ideas
People
Things
|
Are interests dimensions or types
|
Dimensions
|
Are opposite interests on the RIASEC correlated or uncorrelated
|
Uncorrelated
|
Work values
|
individual's characteristic pattern of preferences for certain work outcomes, goals, or objectives
|
Is the following question asking about complementary or supplementary environment fit: does the environment meet the needs of the person?
|
Complementary
|
Is the following question asking about complementary or supplementary environment fit: do the person and environment share the same characteristics?
|
Supplementary
|
What are two measures for assessing the environment?
|
People based measures
Ratings
|
What interest measure predicts performance across all jobs
|
None
|
What is connected to performance as the major component of person-organization fit?
|
Values
|
Does the following question assess breadth or fidelity: how much of a job is captured by the assessment?
|
Breadth
|
Does the following question assess breadth or fidelity: How realistic are the materials?
|
Fidelity
|
What is required for a simulation to be valid?
|
Accurate behavioral responses
|
Are AC predictors usually combined clinically or mechanically?
|
Clinically
|
Are panel interviews more or less valid than structured interviews?
|
Less valid
But more reliable
|
What is the relevant reliability for interviews?
|
interrater
|
Unstructured interviews have less construct validity for what dimensions?
|
Social skills and job experience
|
Do structured or unstructured interviews have stronger incremental validity?
|
Structured
|
What are unstructured interviews better for?
|
Recruiting applicants
Choosing among a few equally good candidates
|
What is the drawback of empirical keying for biodata?
|
Some items will (or won't) correlate with the criterion just by chance
|
What is the drawback of the factor analytic approach for biodata?
|
Items that group together are not necessarily good predictors
|
How can you improve the factor analytic approach for biodata?
|
Follow up factor analysis with other research
|
How do you improve empirical keying for biodata
|
Conduct multiple studies
|
How can you improve references?
|
Standardize them
Use peer ratings
|
What is the following an example of: a female applicant is asked if she has any domestic responsibilities that might interfere with her work, but a male applicant is not asked that question
|
adverse/disparate treatment
|
What is the difference between adverse treatment and adverse impact?
|
Adverse treatment is intentional discrimination
Adverse impact need not be intentional discrimination
|
How is adverse impact shown?
|
Statistical disparities between a majority and a minority group in terms of outcomes
|
Is the burden on the plaintiff or the defendant in an adverse impact case?
|
plaintiff
|
What are the two things that a plaintiff has to show in an adverse impact case
|
1. belongs to a protected group
2. members of the protected group were statistically disadvantaged compared to the majority
|
What is the 80% rule used for
|
Adverse impact
|
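The arithmetic behind the 80% (four-fifths) rule can be sketched as follows. All applicant and hire counts below are hypothetical, chosen only to illustrate the calculation: adverse impact is flagged when the minority group's selection rate falls below 80% of the majority group's rate.

```python
# Hypothetical numbers illustrating the 80% (four-fifths) rule.

def impact_ratio(minority_hired, minority_applicants,
                 majority_hired, majority_applicants):
    """Ratio of the minority selection rate to the majority selection rate."""
    minority_rate = minority_hired / minority_applicants  # e.g., 12/50 = 0.24
    majority_rate = majority_hired / majority_applicants  # e.g., 40/100 = 0.40
    return minority_rate / majority_rate

ratio = impact_ratio(12, 50, 40, 100)
print(round(ratio, 2))   # 0.6
print(ratio < 0.80)      # True -> evidence of adverse impact
```

Because 0.24 / 0.40 = 0.60 is below the 0.80 threshold, this hypothetical hiring pattern would show statistical evidence of adverse impact.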
a staffing model needs to be...
|
comprehensive
|
Why are compensatory selection systems important?
|
In most instances, humans are able to compensate for a relative weakness in one attribute through a strength in another one
|
What are the two basic ways to combine information in making a staffing decision
|
clinical and statistical
|
Is the hurdle system compensatory or noncompensatory?
|
Noncompensatory, because the candidate cannot continue unless each hurdle is cleared
|
cross validation
|
testing a multiple regression on a second sample to see if it still fits well
|
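The cross-validation procedure on the card can be sketched in a few lines. The samples below are made up for illustration: estimate the regression equation in a derivation sample, then apply that same equation to an independent holdout sample and correlate the predictions with the actual criterion scores.

```python
from math import sqrt
from statistics import mean

def ols(x, y):
    """One-predictor least squares; returns (intercept, slope)."""
    mx, my = mean(x), mean(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return my - slope * mx, slope

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# Derivation sample (hypothetical): fit the equation here
x1 = [1, 2, 3, 4, 5, 6]
y1 = [2.1, 2.9, 4.2, 4.8, 6.1, 6.8]
b0, b1 = ols(x1, y1)

# Holdout sample (hypothetical): apply the SAME equation, then correlate
x2 = [1, 2, 3, 4, 5, 6]
y2 = [2.4, 3.3, 3.6, 5.2, 5.9, 7.1]
preds = [b0 + b1 * a for a in x2]
print(round(pearson(preds, y2), 2))  # cross-validated r
```

If the cross-validated correlation stays high, the equation generalizes; a large drop (shrinkage) suggests the original weights capitalized on chance.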
What are letter grades assigned to a score an example of?
|
Score banding
|
Standard error of measurement
|
amount of error in a test score distribution
|
What can we conclude if the difference between two candidates is less than the standard error of measurement?
|
candidates are not really different
|
Is subgroup norming legal or illegal
|
illegal
|
Are clinical or statistical methods preferable for layoffs?
|
statistical
|
Are mechanical or clinical predictions more accurate?
|
Mechanical
|
Is the following an example of mechanical, clinical, both, or synthesis: a personality or cognitive ability test score
|
mechanical
|
Is the following an example of mechanical, clinical, both, or synthesis: an expert rating of an interview or simulation
|
clinical
|
Is the following an example of mechanical, clinical, both, or synthesis: cognitive ability test scores and interview ratings
|
both
|
Is the following an example of mechanical, clinical, both, or synthesis: take a prediction based on clinical judgment and combine it mechanically with other information
|
synthesis
|
Is the following an example of mechanical, clinical, both, or synthesis: take a prediction based on mechanical combination and use it to inform a clinical judgment
|
synthesis
|
Is this a mechanical or clinical synthesis: the expert's clinical rating is treated as an additional predictor that is mechanically combined with all the other information to produce a final rating
|
mechanical synthesis
|
Is this a mechanical or clinical synthesis: predictor scores are combined using an equation-based method to create a final composite score. The composite is given to the expert, who combines all the information into their final rating
|
clinical synthesis
|
Can people do a good job of collecting data?
|
Yes
|
Can people do a good job of combining data?
|
No
|
Is clinical synthesis better than methods that are only equation based?
|
No
|
What is the most effective method for data combination?
|
mechanical
|
Why does clinical combination of data perform so poorly?
|
People are inconsistent in how they apply rules when making judgments
Can be overly swayed by unusual or unimportant information
Develop incorrect rules for making judgments
Have a predictable (lawful) set of flawed decision-making biases
|
What are the following examples of: optimal weights, meta-analytic weights, research-literature-based weights
|
criterion weighting
|
What are the following examples of: criterion weighting, expert-judgment-based weights, bootstrapped weights
|
Differential weights
|
How do you get an optimal weight?
|
You can't; weights estimated from a sample capitalize on chance, so truly optimal weights are unattainable in practice
|
What are the disadvantages of literature based weights
|
The literature isn't as well organized as a good meta-analysis
Results are likely to be less stable than meta-analytic results
|
What are the advantages of using literature based weights
|
Useful if you're using predictors that have not been examined in a meta-analysis
Weights based on solid empirical information
Can use weights right away without waiting for a primary study
|
What is the problem with expert weights?
|
They are only as good as the experts we ask
|
Which is better: bootstrapped weights or the clinical judgments from which they are derived?
|
Bootstrapped
|
Are unit weights or differential weights better when predictors are of similar strength?
|
Unit weights
|
Are unit weights or differential weights better when predictors are very different?
|
Differential weights
|
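The contrast between unit and differential weights on the two cards above can be sketched numerically. The standardized predictor scores and the weights below are made up for illustration only.

```python
# Hypothetical standardized (z) scores for one candidate on three predictors
z_scores = {"cognitive": 1.2, "conscientiousness": 0.4, "interview": -0.3}

# Unit weighting: every predictor counts equally (weight of 1)
unit_composite = sum(z_scores.values())

# Differential weighting: each predictor gets its own weight
# (weights here are hypothetical, not from any real validity study)
weights = {"cognitive": 0.50, "conscientiousness": 0.30, "interview": 0.20}
diff_composite = sum(weights[k] * z_scores[k] for k in z_scores)

print(round(unit_composite, 2))  # 1.3
print(round(diff_composite, 2))  # 0.66
```

When the predictors are of similar strength, the simpler unit-weighted composite tends to hold up as well as the differentially weighted one; differential weights pay off only when predictor validities genuinely differ.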
What is the difference between fairness and bias?
|
Fairness is a subjective judgment referring to societal values about whether or not a decision is fair
Bias is a technical concept concerned with whether a predictor score misrepresents a person's likelihood of effective future performance
|
With regard to bias, what is a major problem in using cognitive ability tests for selection?
|
There are subgroup differences in scores
e.g., the white mean score is about 1.0 SD higher than the black mean
|
What is the diversity-validity dilemma?
|
White-minority differences in cognitive ability scores make it impossible for organizations to both maximize the validity of their selection procedures and hire a diverse workforce
|
What subgroup differences are seen across sex?
|
Mostly none
except psychomotor ability
|
What subgroup differences can be found across racial groups
|
General mental ability
personality (but these differences are trivial)
|
What are three ways to evaluate bias?
|
mean score differences
differential validity across groups
differential prediction
|
What method for evaluating bias is the following: one group has a higher mean than another
|
mean score differences
|
What method for evaluating bias is the following: the predictor for whites is r=.30 but the predictor for blacks is r=.25
|
differential validity
|
What method for evaluating bias is the following: the intercept of the regression line for whites is higher than for blacks
|
Differential prediction
|
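The intercept-difference form of differential prediction described on the card above can be sketched with two hypothetical regression equations that share a slope but differ in intercept (all coefficients are made up for illustration).

```python
# Hypothetical regression equations for two groups: same slope,
# different intercepts -- an intercept-based differential prediction.

def predict(score, intercept, slope=0.5):
    """Predicted performance from a test score."""
    return intercept + slope * score

same_score = 50
group_a = predict(same_score, intercept=2.0)
group_b = predict(same_score, intercept=4.0)

# Identical test scores yield different predicted performance by group:
print(group_a, group_b)  # 27.0 29.0
```

With equal slopes, the gap between the lines is constant at every test score; if the slopes also differed, the lines would cross, which is the interaction case covered in a later card.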
What is the limitation of differential validity
|
there is evidence of small differential validity across groups but no evidence of differential prediction
|
Why do we see differences in validity across groups, but no evidence of differential prediction?
|
Differential prediction focuses on differences in unstandardized regression slopes and intercepts
|
What is the most used and legally defensible model of differential prediction
|
predictor oriented differential prediction
|
What happens if the regression line for whites and the regression line for blacks cross?
|
There is a significant interaction term between races
|