Racism and Sexism in Teaching Evaluations

These concerns prompted Shorter to delve into the research surrounding teaching evaluations. He discovered a wealth of peer-reviewed papers spanning decades, all pointing to the same disturbing trend: gender and racial biases in student evaluations. Women consistently received lower ratings than men, and younger women were often judged less professionally than their older counterparts. Women of color faced additional challenges, being rated as less effective than white women. These biases, based on gender, race, and even seemingly unrelated factors like the time of day a course was taught, raised serious questions about the validity of using student evaluations as a sole measure of teaching effectiveness.

One particularly striking finding was a meta-analysis by Rebecca Kreitzer and Jennie Sweet-Cushman (2021), which demonstrated that evaluations tended to be higher for courses with less workload, for electives, and for classes where students were provided with treats like cookies or chocolate. Furthermore, Bob Uttl, Carmela A. White, and Daniela Wong Gonzalez (2017) found “no significant correlations between … ratings and learning,” questioning the effectiveness of these evaluations in assessing actual educational outcomes.

The American Sociological Association (ASA) recognized these issues and recommended in 2019 that student evaluations not be used as the sole basis for merit and promotion decisions, but only as one component of a broader, more holistic assessment. Some universities, such as the University of Southern California, the University of Oregon, and the University of Nebraska at Lincoln, have already taken steps to combine student evaluations with other forms of assessment in personnel decisions. The ASA’s stance has garnered support from nearly two dozen professional organizations.

The legal implications of relying solely on student evaluations are also a cause for concern. In a case at Ryerson University (now Toronto Metropolitan University) in 2009, an arbitrator, William Kaplan, acknowledged “serious and inherent limitations” of student evaluations, describing them as “imperfect at best and downright biased and unreliable at worst.” This raises the possibility of legal challenges if colleges continue to use these evaluations as the primary criterion for decision-making.

In response to these issues, Shorter’s own department at UCLA decided to prioritize fairness and reliability. It chose not to rely on student evaluations for job-security decisions and instead implemented a system that allows faculty members to use peer assessment and self-evaluation, with documented revisions to their pedagogical statements. This approach aligns with the principle that academics should be assessed by peers and experts in their respective fields rather than by student evaluations alone.

In conclusion, David Delgado Shorter’s article highlights the urgent need to reconsider the role of student evaluations in academic assessment. The evidence he presents suggests that these evaluations are plagued by biases and may not accurately measure teaching effectiveness. It’s time for institutions of higher education to adopt more comprehensive and fair evaluation methods that better serve both faculty and students.