U of M PSY 3711 - Performance Appraisal

PSY 3711, 1st Edition, Lecture 18

Outline of Last Lecture
I. Humans behind the machine
II. Meta-analytic weights
III. Expert weights
IV. Bootstrapped weights
V. Which is best?

Outline of Current Lecture
VI. Why should we care about performance appraisal?
VII. Performance appraisal substance
VIII. Performance appraisal process

Current Lecture

I. Why should we care about performance appraisal?
   a. Appraisal serves numerous purposes, including:
   b. Criterion data
   c. Compensation
   d. Layoff decisions
   e. Goal setting
   f. Feedback

II. Performance appraisal substance
   a. Types of performance appraisal
      i. Trait based
      ii. Behavior/task based
      iii. Results based
   b. Typical performance vs. maximal performance
      i. Typical performance: performance due to ability and average motivation over time
      ii. Maximal performance: performance due to bursts of effort right now
      iii. Maximal performance requires that the worker is aware performance is being observed, accepts the goal of maximizing performance, and works for a period short enough to maintain maximum effort throughout
   c. Sources
      i. Supervisors (most common; tend to see maximal performance)
      ii. Peers (tend to see typical performance)
      iii. Self
      iv. Subordinates
      v. Customers
      vi. Consultants
      vii. In general, MULTIPLE SOURCES yield more valid information; however, the appropriateness of a given source may change with the purpose of the appraisal
         1. Sources of variability: raters and ratees
         2. Appraisal alone is not sufficient; it must have consequences
         3. But appraisal is often treated as an event instead of a process, which leads to poor follow-up and insufficient resources
         4. 360-degree feedback has evidence for validity

III. Performance appraisal process
   a. Methods
      i. Category method (i.e., absolute)
         1. Ratings: rate each employee in reference to a criterion or absolute standard
         2. Rate using graphic rating scales, behavioral observation scales, or behaviorally anchored rating scales
         3. PREFERRED APPROACH
      ii. Comparative method (i.e., relative)
         1. Ranking: order employees from first to last
         2. Forced distribution (a sketch of this idea appears at the end of these notes)
      iii. Narrative method: critical incidents, essays, field review
      iv. Work sample method
         1. Employees perform a sample of their job and are evaluated on competence
         2. Can be done before or after hire
   b. Graphic rating scales
      i. Rating format that displays items graphically from high to low
      ii. Success depends on high-quality dimension definitions and accurate scale anchors
   c. Behavioral observation scales
      i. Rating format that includes behavioral anchors describing what a worker has done, or might be expected to do, in a particular duty area
   d. Best practices
      i. There are a number of rating formats to choose from
      ii. Key characteristics of good formats:
         1. The dimensions being assessed are well defined, behaviorally based, and relevant to the job
         2. Response category anchors are behaviorally based, relevant, and accurately placed
         3. The method used to assign ratings or rankings and to interpret them is clear and unambiguous
   e. Observed structure of performance
      i. Despite the multidimensional nature of performance, the observed structure is usually a single factor
         1. Why? Rater errors are one reason (simple statistical checks for these errors are sketched just below this list):
            a. Halo: a single impression of an individual (e.g., leniency) applied across all dimensions
            b. Primacy/recency effects: overweighting events that occurred first or last
            c. Central tendency, leniency, severity: tendencies to rate all people as average, high, or low, respectively
            d. Contrast effect: the performance of others affects the judgment of the current rating target
            e. Rater bias: performance-irrelevant biases
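The lecture defines these errors conceptually; in practice they are often flagged statistically. Below is a minimal Python sketch of the usual operationalizations, assuming a 1-to-5 rating scale: halo as low spread across dimensions for one ratee, leniency/severity as a mean shifted above or below the scale midpoint, and central tendency as tightly clustered ratings. All function names and the example data are hypothetical, not from the lecture.

    from statistics import mean, pstdev

    SCALE_MIN, SCALE_MAX = 1, 5
    SCALE_MID = (SCALE_MIN + SCALE_MAX) / 2  # midpoint of the rating scale (3.0)

    def halo_index(dimension_ratings):
        # Low spread across dimensions for a single ratee suggests halo:
        # one global impression applied to every dimension.
        return pstdev(dimension_ratings)  # near 0 -> possible halo

    def leniency_severity_index(all_ratings):
        # Mean well above the midpoint suggests leniency; well below, severity.
        return mean(all_ratings) - SCALE_MID

    def central_tendency_index(all_ratings):
        # Ratings tightly clustered together suggest central tendency:
        # everyone rated about the same, typically "average."
        return pstdev(all_ratings)  # near 0 -> possible central tendency

    # Hypothetical example: one rater scores three ratees on three dimensions.
    ratings = {
        "ratee_A": [5, 5, 5],  # zero spread across dimensions -> halo flag
        "ratee_B": [4, 5, 4],
        "ratee_C": [5, 4, 5],
    }
    for ratee, dims in ratings.items():
        print(ratee, "halo index:", round(halo_index(dims), 2))

    pooled = [r for dims in ratings.values() for r in dims]
    print("leniency/severity:", round(leniency_severity_index(pooled), 2))  # +1.67 -> lenient
    print("central tendency:", round(central_tendency_index(pooled), 2))

In practice such indices are compared across raters; a rater whose indices are extreme relative to peers is a candidate for the rater training described next.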
      ii. Minimizing rater error
         1. Focus on the ratings
            a. Differentiating traits
            b. Using graphic rating scales
         2. Focus on the raters
            a. Provide training
            b. Rater error training: raters can avoid common rater errors if they are made aware of them; this reduces halo error but does not increase accuracy
            c. Dimension training: raters can better identify dimensions if trained on what behaviors those dimensions involve; this reduces halo error and improves rating accuracy, but does not improve severity error
            d. Frame-of-reference training: raters can be more consistent if they understand the context for providing the rating. Provide information on the multidimensional nature of performance using a standardized model, and give feedback on practice ratings; this improves accuracy but does not improve halo error
   f. Best practices for performance standards
      i. Standardized and uniform
      ii. Formally communicated
      iii. Provide prompt notice of performance deficiencies AND opportunities to correct them
      iv. Employees should have access to their reviews
      v. Provide methods to contest reviews
      vi. Use multiple, diverse, and unbiased raters
      vii. Require thorough, consistent documentation
      viii. Keep records using a performance management system
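As referenced under the comparative methods above, here is a minimal sketch of the forced-distribution idea: employees are ranked on an overall score and then forced into fixed category quotas. The category labels, percentages, and scores are illustrative assumptions, not values from the lecture.

    def forced_distribution(scores, buckets=(("top", 0.10), ("strong", 0.20),
                                             ("middle", 0.40), ("developing", 0.20),
                                             ("low", 0.10))):
        # Rank employees by score (highest first), then assign each to a
        # category whose size is fixed by the percentages in `buckets`.
        ranked = sorted(scores, key=scores.get, reverse=True)
        out, i = {}, 0
        for label, share in buckets:
            n = round(share * len(ranked))
            for name in ranked[i:i + n]:
                out[name] = label
            i += n
        for name in ranked[i:]:  # rounding leftovers go to the last category
            out[name] = buckets[-1][0]
        return out

    # Hypothetical overall scores for ten employees.
    scores = {"Ana": 92, "Ben": 78, "Chloe": 85, "Dev": 66, "Eli": 74,
              "Fay": 88, "Gus": 59, "Hana": 81, "Ivan": 70, "Jo": 95}
    print(forced_distribution(scores))

Rounding can leave a few employees unassigned, so this sketch places any leftovers in the last category; a real system would need an explicit rounding and tie-breaking policy.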

