MSU PSY 255 - Exam 2 Study Guide


PSY 255, 1st Edition
Exam #2 Study Guide - Lectures 8-15

Lecture 8 (February 17) - Performance Appraisal

Selection and Tests
• In a perfect world we would hire based on performance criterion scores, but those don't exist at hiring time, so we use predictors as proxies for criteria
• Test
  – A systematic procedure for observing behavior and describing it quantitatively
  – Types of tests
    • Speed vs. power
    • Individual vs. group
    • Paper-and-pencil vs. performance
• Speed test: relatively easy items; the test taker must answer as many as possible in a given time
• Power test: more difficult items with no time constraints
• Individual test: administered to one person at a time
• Group test: multiple people tested at a time
• Paper-and-pencil test: test takers respond to questions on paper or a computer
• Performance test: involves manipulating an object or a piece of equipment
• A good test is valid, reliable, practical, and fair

Types of Predictors
• General cognitive ability tests (CATs)
  – Believed to be important for most jobs
  – The most common selection measure
  – Began with the Army Alpha and Beta tests
  – Problems
    • Racial differences in scores
    • Related to years of formal schooling (which questions their validity)
    • Stereotype threat
• Specific cognitive ability tests
  – Test specific abilities required on the job (vs. general capacity to learn)
    • Mechanical (Bennett Mechanical Comprehension Test)
    • Spatial (Space Relations Test)
    • Clerical (Minnesota Clerical Test)
• Psychomotor tests
  – Assess speed and accuracy of motor and sensory coordination
    • Gross and fine motor movement, vision, hearing, etc.
• Personality tests
  – Measure predispositions to behave in certain ways across situations
  – Big 5: Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism (OCEAN)
    • Specific dimensions or facets fall under each category
  – Show less discrimination against minorities
  – Problem: people can fake their answers
• Integrity tests
  – Predict the propensity to engage in counterproductive work behaviors (CWBs)
    • Ex: theft, cheating, sabotage
  – Sometimes described as measuring "social conscientiousness"
  – Two types
    • Overt: measures attitudes toward CWBs and self-reported CWBs
    • Personality-type: measures CWB-related personality characteristics (ex: risk taking, dishonesty, emotional instability)

Lecture 9 (February 19) - Performance Appraisal Continued
• Work samples
  – Replicate work done on the job; they are performance tests that assess criteria directly
  – Fairly common and easy to build for hands-on jobs (factory work, engineering, computer programming, etc.)
• Assessment centers
  – Multiple raters assess multiple ratees on multiple dimensions using multiple exercises
  – Usually take 2-3 days
  – About 50% of major companies use them, especially when filling upper-management positions
  – Began in Germany in the 1930s and were further developed by the U.S. during WWII
• Popular assessment center exercises
  – In-basket (a management simulation)
    • Applicants complete a series of job-related scenarios (make decisions, respond to memos and grievances, create schedules, write letters, etc.)
  – Leaderless group discussion
    • A small group of applicants is given an issue to resolve
    • Assigned roles vs. no roles
    • Rated on social dimensions (aggression, persuasion, listening, flexible thinking, etc.)
• Biographical info
  – "Past behavior is the best predictor of future behavior"
  – Application blank
    • Very commonly inquires about education, work experience, hobbies, etc.
    • Often used and abused; must be supported by job analysis
  – Biodata
    • Biographical information blanks (BIBs)
    • Tend to include many multiple-choice or Likert items asking broad questions about health, family, interests, social experiences, etc.
• Interviews
  – A procedure designed to predict future performance from oral responses to oral questions
  – The most popular predictor, but not very valid
• Letters of reference/recommendation
  – Used to be simple
    • Former employers have been sued for recommending poor employees
    • Employees have sued for libel over poor recommendations
  – There is no requirement to give a recommendation
    • Organizations often just give the facts

Lecture 10 (February 24) - Selection

Recruitment
- Attracting and encouraging applications for employment
  o Sets a limit on the quality of applicants that can be selected
  o Good for reducing adverse impact and diversifying the workforce
  o Realistic job previews (RJPs)
  o Formal methods (newspaper ads, campus visits, internet, on-site) vs. informal (word of mouth)
- Recruitment is sometimes used to discourage applicants and reduce their number!

Selection
- Begins with job analysis
  o Job specifications tell us which KSAs to select on
  o We can then develop, or use off-the-shelf, measures of those KSAs
- Selection battery
  o A set of predictors and tests
  o Yields better predictions than using only one predictor
- Criterion-related validity
  o Establishes the strength of the relationship between predictors and job performance
    • A quantitative method
    • Yields a validity coefficient
  o Two designs: predictive and concurrent
- Predictive validity
  o A longitudinal design: predictor data are collected at Time 1 and criterion data at Time 2
  o Steps:
    • Job analysis tells us the required KSAs
    • Choose predictors that measure those KSAs
    • Administer the predictors to job applicants
    • Hire applicants based on information other than the predictors being validated
- Concurrent validity
  o Predictor and criterion data are collected simultaneously from incumbents
- Cross-validation
  o Validity shrinkage: the validity of the predictors will probably be lower in a different sample of applicants/incumbents
  o Thus, cross-validate to assess the amount of shrinkage and double-check the validity
  o Essentially, compare how much the coefficient of determination changes from sample 1 (R²₁) to sample 2 (R²₂)

Lecture 11 (February 26) - Selection Continued

Selecting Applicants
- Multiple cutoffs
  o Cutoffs (passing scores) are established for each predictor
    • Set subjectively (by SMEs) or statistically (contrasting-groups or borderline-groups methods)
  o Non-compensatory
    • Applicants must meet or exceed each cutoff (overall performance on the selection battery is irrelevant)
    • Requires minimal competence in all areas
- Multiple hurdles
  o A variant of multiple cutoffs; also non-compensatory
  o Applicants take the individual predictors in a predetermined order
  o Cost effective
- Multiple regression (MR)
  o A statistical technique that forecasts criteria using one or more predictor scores

