MSU PSY 255 - Performance Appraisal

PSY 255 1st Edition
Lecture 6

Outline of Last Lecture
I. Criterion Measurement

Outline of Current Lecture
II. Performance Appraisal
III. History
IV. Format Research
V. Rater Error and Rater Accuracy

Current Lecture

Performance Appraisal: Systematic observation and evaluation of job performance, and provision of feedback
– Part of performance management
• Along with goal setting, continuous coaching, and developmental planning
– Uses are:
• Personnel decisions (promoting, firing, etc.)
• Developmental (coaching, training)
• Documentation (defense against litigation)
• Ineffective systems have disastrous consequences
– The wrong people are promoted and fired
– Employees feel unfairly treated and act accordingly
– Legal problems

History of Performance Appraisal
• Age of accuracy
– Rating format
– Rater errors and biases
– Rater training
• Social context of PA and fairness
– Rater and ratee reactions
– Multisource feedback
– Participation

Format Research
• Premise: a good rating form will improve rater accuracy and precision
• Graphic rating scales
– Oldest format and very common
– Rater judges the ratee's standing on different traits/dimensions using a scale with numerical/verbal anchors
• Behaviorally anchored rating scales (BARS)
– Graphic rating scale + behavioral descriptions (critical incidents) as anchors
• Developing BARS – 5 steps
– Group 1 SMEs identify important job performance dimensions
– Group 2 SMEs generate high-, medium-, and low-effectiveness critical incidents for those dimensions
– Group 3 SMEs re-sort the critical incidents into the appropriate dimensions (retranslation)
– Group 4 SMEs rate the remaining critical incidents on effectiveness
– Multiple incidents are chosen for each dimension and the BARS are developed
• Checklists
– Rater checks off each behavior that the employee exhibits
– Checked items are summed for an overall score
• Weighting issues
– Forced-choice checklists
• Check one behavior among a group of 2-4
• Employee comparisons
– Rank-ordering from best to worst
– Paired comparisons
– Forced distribution

Rater Errors
• Common rater errors
– Errors can be intentional (rater motivation) or unintentional (information-processing biases, dispositional)
– Halo
• Using global evaluations to make dimension-specific ratings (or unwillingness to discriminate between specific dimensions)
• True halo
– Recency effect
• Relying heavily on the most recent information
– Primacy effect
• First-impression bias
– Performance cue effect
• Ratings biased by knowledge of prior performance
– Similar-to-me bias
• Higher ratings given to similar others
• Distributional errors (see the sketch at the end of this section)
– Problem: the rater can't discriminate between effective and ineffective employees
– Leniency
• Mean of one rater's ratings is higher than the mean of all ratees across all raters, OR
• Mean rating is higher than the scale midpoint
– Severity
• Mean of one rater's ratings is lower than the mean of all ratees across all raters, OR
• Mean rating is lower than the scale midpoint
– Central tendency
• All ratees are rated as average
• Rater error training (RET)
– Teaches raters about errors and biases
• …BUT reducing error ≠ improved accuracy, and it can even reduce accuracy
• Rater variability training (RVT)
– Teaches differences between ratees
• Frame-of-reference training (FOR)
– Teaches differences between ratings
• Behavior observation training (BOT)
– Teaches how to detect, perceive, and recall information
– Combined with FOR, may increase accuracy
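The distributional errors above are defined with simple means, so a short worked example can make the definitions concrete. The sketch below is illustrative only and is not from the lecture: it assumes a 1-5 rating scale (midpoint 3), a made-up ratings table, and an arbitrary spread threshold for flagging central tendency.

```python
# Minimal sketch: flagging the distributional rater errors defined above
# (leniency, severity, central tendency) from a small table of ratings.
# Assumptions (not from the lecture): a 1-5 scale, so the midpoint is 3,
# and a plain dict of {rater: [ratings given to each ratee]}.

from statistics import mean, pstdev

SCALE_MIDPOINT = 3.0  # midpoint of the assumed 1-5 scale

ratings = {
    "Rater A": [5, 5, 4, 5],   # rates everyone high   -> possible leniency
    "Rater B": [2, 1, 2, 2],   # rates everyone low    -> possible severity
    "Rater C": [3, 3, 3, 3],   # rates everyone average -> central tendency
}

# Grand mean of all ratees across all raters (the comparison point in the
# first definition of leniency/severity above).
grand_mean = mean(r for scores in ratings.values() for r in scores)

for rater, scores in ratings.items():
    rater_mean = mean(scores)
    spread = pstdev(scores)

    if rater_mean > grand_mean or rater_mean > SCALE_MIDPOINT:
        flag = "leniency"
    elif rater_mean < grand_mean or rater_mean < SCALE_MIDPOINT:
        flag = "severity"
    else:
        flag = "no distributional error flagged"

    # Central tendency: everyone rated near the middle with almost no spread
    # (the 0.5 cutoffs are arbitrary, chosen only for this illustration).
    if spread < 0.5 and abs(rater_mean - SCALE_MIDPOINT) < 0.5:
        flag = "central tendency"

    print(f"{rater}: mean={rater_mean:.2f}, spread={spread:.2f} -> {flag}")
```

Run as-is, the sketch flags Rater A as lenient, Rater B as severe, and Rater C as showing central tendency, matching the definitions in the notes.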
Rater Accuracy
– Absence of bias ≠ greater accuracy
– Moderators of accuracy
• Accountability
• Purpose (developmental vs. evaluative)
• Culture (nonpolitical,

