Results from a recent survey and two studies indicate that evaluations by students of courses and their instructors should be interpreted with care.
A survey of its members conducted by the American Association of University Professors (AAUP) found that while 69 percent of respondents said they saw a need for student course evaluations, only 47 percent deemed the survey regimes at their institutions to be effective, according to a report in Inside Higher Ed (with which AAUP has a partnership). The evaluations, said Craig Vasey, who sits on the committee that issued the AAUP report, amount to little more than “student satisfaction surveys.”
Some 9,000 members responded to the AAUP survey, which was sent by invitation to 40,000 of the organization’s members.
The AAUP heard complaints similar to those reported by PSC members about low response rates to online evaluation forms and administrators’ outsize reliance on the forms when assessing faculty performance. Faculty at institutions that use online forms estimated the response rate at between 20 and 40 percent, while those at colleges and universities where paper forms are used estimated response rates at around 80 percent.
Interpret With Care
In a report on the survey issued by the AAUP’s Committee on Teaching, Research and Publication, the writers noted that some women and people of color reported receiving comments reflective of prejudice. The “abusive, bullying effects of anonymity that are today pervasive on websites… [make] their way into student evaluations,” the committee reported.
‘Bossy’ vs. ‘Brilliant’
While based on anecdotal evidence, that assertion is in line with the findings of Benjamin Schmidt, a professor at Northeastern University, who analyzed 14 million reviews on the Rate My Professors website and found that men were more likely to be described as “brilliant” or “awesome,” while women were more likely to be characterized as “bossy” or “annoying.”
Based on the results of its survey, the AAUP committee also calls for ending the use of numerical scores drawn from student evaluations as a measure of teaching effectiveness, since such scores fail to account for factors beyond faculty control. For instance, a report by Philip Stark, a statistics professor at UC Berkeley, detailed how student satisfaction varies with class size, course level and a teacher’s strictness. The Berkeley study goes on to recommend better ways to incorporate student evaluations into assessments, from not averaging scores to relying more on peer classroom observation. The PSC contract requires peer observations for faculty without tenure at least once per semester, with certain guarantees of fair procedure: such observations must last a full classroom period, and faculty must be given at least 24 hours’ notice.
Though faculty have raised a range of concerns regarding the current design and use of student evaluations at CUNY, they aren’t asking for their elimination. Faculty interviewed by Clarion said the assessments can provide useful feedback and have helped them determine whether a text was accessible or instructions for an assignment were clear.
Student Voices Needed
“I think student evaluations are very important. It’s essential that student voices are heard and valued,” commented Nancy Stern, chair of the Teaching, Learning and Culture department at City College. “But student views are not the only measure of good teaching or learning.”