Grade Inflation and Student Individual Differences as Systematic Bias in Faculty Evaluations

Marie-Line Germain and Terri A. Scandura

The media has recently exposed that grade inflation is a concern for higher education in North America. Grade inflation may be due to consumerism by universities that now compete for students. Keeping students happy (and paying) may have been emphasized more than learning. We review the literature on faculty evaluation and present a model that incorporates students' individual differences and grade inflation as sources of bias in teaching evaluations. To improve teaching effectiveness and avoid consumerism in higher education, faculty evaluations must begin to focus on students and the reciprocal role of grade inflation in teaching evaluation.

Author note: Marie-Line Germain, Department of General Education, City College, Miami, FL. Terri A. Scandura, Department of Management, School of Business Administration, University of Miami. Correspondence concerning this article should be addressed to Marie-Line Germain, Department of General Education, City College, 9300 South Dadeland Boulevard, Suite 700, Miami, FL 33156. A previous version of this paper was presented at the Society of Industrial and Organizational Psychology meetings, Orlando, FL (April 2003). The authors would like to thank Chris Hagan and Clara Wolman for their helpful comments on an earlier version of this paper.

Today, faculty are being held accountable for how well they serve the U.S. student population, and it has become common practice in universities and colleges for students to "grade" the professors that grade them. Grade inflation has become an issue in higher education; students' grades have been steadily increasing since the 1960s (Astin, 1998). In June 2001, a record 91 percent of Harvard seniors graduated with honors, and 48.5 percent of grades were A's and A-minuses (Boston Globe, 2001). Grade inflation has been under scrutiny, and there is a need to address exponential grade inflation (Berube, 2004). Several studies have linked grade inflation with students' ratings of faculty (Greenwald, 1997; Stumpf & Freedman, 1979). According to Pfeffer and Fong (2002), "Grade inflation is pervasive in American higher education, and business schools are no exception" (p. 83).

Students' ratings of management faculty now serve dual purposes. First, they provide faculty with feedback on teaching effectiveness. They are also used for faculty reappointment, promotion, and/or pay increase decisions (Jackson, Teal, Raines, Nansel, Force, & Burdsal, 1999). Yet Scriven (1995) identified several construct validity problems with student ratings of instruction, one of them being student consumerism. Consumerism results in bias due to information that is not relevant to teaching competency but is important to students, such as textbook cost, attendance policy, and the amount of homework. Because of the impact on tenure and career, faculty might try to influence student evaluations, a phenomenon referred to as "marketing education," or even seduction (Simpson & Siguaw, 2000). Some have become alienated from the process of teaching evaluation entirely. Professors who have become hostile to evaluations (Davis, 1995) often do not use the feedback they receive in constructive ways (L'Hommedieu, Menges, & Brinko, 1997).

Faculty Evaluations as Performance Appraisals

Since student ratings of faculty teaching effectiveness are used as one component of faculty evaluation, it seems reasonable to consider these instruments performance ratings. As such, they are subject to a number of possible biases, as has been shown in the literature on rating accuracy in Industrial and Organizational Psychology (Campbell, 1990; Murphy & Cleveland, 1995). A number of studies have indicated problems with the reliability of performance ratings (Christensen, 1974; Wohlers & London, 1989). As noted by Viswesvaran, Ones, and Schmidt (1996), "...for a measure to have any research or administrative use, it must have some reliability. Low reliability results in the systematic reduction in the magnitude of observed relationships..." (p. 551). The accuracy of performance evaluation ratings has been challenged as well (Murphy, 1991). This research has led to recommendations for improving rating accuracy. For example, Murphy, Garcia, Kerkar, Martin, and Balzer (1982) reported that the accuracy of performance ratings is improved when ratings are done more frequently. However, faculty evaluations, in most cases, occur only at the end of the course, leaving greater possibility for error. Other research has reported problems due to individual differences such as leniency or stringency (Bernardin, 1987; Borman, 1979; Borman & Hallam, 1991).
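The Viswesvaran, Ones, and Schmidt (1996) point can be made concrete with the classic Spearman correction for attenuation, a standard psychometric identity sketched here with hypothetical reliability values chosen only for illustration:

r_{xy}^{\mathrm{obs}} = \rho_{XY}\,\sqrt{r_{xx}\, r_{yy}}

Here \rho_{XY} is the correlation between the underlying constructs, and r_{xx} and r_{yy} are the reliabilities of the two measures. If, say, student ratings had a reliability of .50 and the effectiveness criterion a reliability of .80, a true correlation of .40 would be observed as roughly .40 \times \sqrt{.50 \times .80} \approx .25, which is the systematic shrinkage of observed relationships that the quotation describes.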
Construct validity relates to the level of correspondence between performance evaluation and the actual performance of an individual on the job. The construct validity of performance ratings has rarely been examined in the literature (Austin & Villanova, 1992; Lance, 1994). Recently, Scullen, Mount, and Judge (2003) examined the construct validity of ratings of managerial performance using two samples and four different rating sources (boss, peer, subordinate, and self). Their results indicated that lower-order factors (technical and administrative skills) were better supported by their data than higher-order factors (contextual performance: human skills and citizenship behavior). They conclude "...that the structure of ratings is still not well understood" (p. 50). One might argue that teaching effectiveness is as complex as, or perhaps even more complex than, the contextual performance aspect of managerial performance. Construct validity must start with a clear definition of the construct of interest (Murphy, 1989). In the case of faculty evaluations, there is no clear definition of the criterion of effective teaching upon which to develop rating instruments.

The Criterion Problem

Research has shown that there is no one correct way of teaching (Joyce & Weil, 1996). Marsh (1982) found that the single most important factor affecting student evaluations was the amount learned, and the least important was the course difficulty. Researchers seem to agree that good faculty evaluations should reflect the amount learned in a class. However, not all students agree that learning is the most important factor in evaluating an instructor. Affect, or likeability, for example, may be more important than knowledge imparted. Faculty evaluations' imperfections are perhaps due to the fact that they utilize fallible measures (Guilford, 1954; Nunnally,

