Professors Voice Concern over Gender Discrepancies in Course Evaluations

Professor of sociology Sheryl Skaggs said including more qualitative questions may improve the current method of course evaluations. Photo by Anjali Venna | News Editor

Female faculty note differences in ratings, student behavior toward male colleagues

Students are encouraged to complete course evaluations at the end of every semester, but factors such as race and gender may influence how students review their professors.

Lauren Santoro, an assistant political science professor, has received comments in her course evaluations and had interactions with students that caused her to question whether those remarks were related to her gender.

“Research has shown that female faculty and minority faculty members are rated lower than white male faculty members. So the bias against, I feel like myself or other female faculty or minority faculty, is going to be implicit, which means that it’s not really intentional,” Santoro said. “It’s not like I can essentially point to specific instances where the discrepancies are widespread. It’s just you wonder, and I know that I am penalized for certain things.”

One such instance, she said, is the complaints she receives about sticking to her syllabus policies.

“I’m a pretty big stickler on deadlines and policies. To ensure fairness for all my students, I don’t make exceptions,” Santoro said. “I find in my teaching evals students talk about how that’s unfair or that’s not appreciated and they can’t believe that I’m such a stickler about certain policies. I don’t know, but do I get penalized more so for sticking to syllabus policy than my male colleagues?”

She said her evaluations refer to her as ‘teacher’ or ‘instructor’ more often than ‘professor,’ and she has read comments questioning her qualifications.

“When you have students that come in and complain about a grade or they debate you about a grade you always wonder if they feel like they can debate their grade with me because I am a young female professor,” Santoro said. “In my discussion with my male colleagues, the things that I’ve had to deal with, they don’t deal with.”

One example, she said, is male students who debate or complain to her about a grade or a late assignment. This happens most often in her state and local government class, a course that is mandatory for all students.

“There have been some instances I’ve dealt with that I’ve asked my male colleagues about and they’ve told me ‘we’ve never had to deal with that,’” Santoro said.

Santoro said she has also received negative comments through informal channels such as Rate My Professor. One post in particular commented on her pregnancy.

“In these informal, anonymous channels, that implicit bias does become more explicit. I try not to look at those other sources but if you read them they are completely disparaging,” Santoro said. “In the fall of 2018, I taught right up to my due date and so there was a comment disparaging me for panting too hard while lecturing. I feel like students who are commenting on your appearance, are they really able to evaluate you effectively or unbiasedly?”

Santoro said she has not gone to administration about this because nothing has been egregious enough to require her to go to her program head.

“I think that a lot of this is part of the job and I think it’s unfortunate that I have to deal with it, and until students see more young female faculty in front of them it’s going to happen, unfortunately,” Santoro said. “There is definitely some understanding and sympathy. I know that people in my program are aware of the implicit bias against women in course evaluations, but I don’t know the extent to which that goes into our (overall) evaluations. I think that, and I can only speak to my program, that the EPPS college is going to be supportive were I to raise (the concern).”

Santoro said she believes student evaluations have a role at the university, but that their limits need to be recognized. In her courses, she has students give critiques to improve the class, though those critiques are not anonymous.

“I appreciate students’ feedback and I try to get it from my classes in ways that are constructive and helpful for me,” Santoro said. “Regarding course evaluations, I think there’s a role for them to play, but if we know that they’re biased against certain members of academia, I think that bias should be incorporated in our female and minority evaluations.”

For Larissa Werhnyak, professor of interdisciplinary studies, critiques are helpful when they discuss details such as the usefulness of the books she assigned, the pacing of the course, and the amount of time spent on topics.

“Anything that really goes to the substance of the course that I may not be seeing is really useful,” Werhnyak said.

“In general, if you’re making comments from a place of ‘I really was here to learn and here’s something that maybe would have helped me to learn better,’ if you can say that with a 100% confidence then that is a useful comment.”

The way course evaluations are used in the hiring, promotion and tenure process for faculty members varies widely, Werhnyak said. When she served on hiring committees, she considered course evaluations but was far more interested in a sample assignment, syllabus or teaching statement that the candidate submitted.

“If we interview that person they will do a teaching demonstration, so seeing them on the spot tells me a lot more,” Werhnyak said. “The way that evaluations play into that is that it’s one of the main ways to improve. If I’m seeing somebody who’s a good teacher, then part of that is having read their evaluations and implementing pieces of constructive criticism from the past.”

In September 2019, the American Sociological Association released a statement on the use of student evaluations of teaching (SETs).

“A scholarly consensus has emerged that using SETs as the primary measure of teaching effectiveness in faculty review processes can systematically disadvantage faculty from marginalized groups,” the ASA said.

Sheryl Skaggs, a professor of sociology, said the emphasis on raw evaluation scores is one problematic aspect of how course evaluations are used.

“I think across the board what ends up happening — whether it happens at the university committee that oversees the tenure and promotion cases or within programs or departments within schools — is that the peer evaluators look at a raw score and say ‘Oh they’re not a good teacher because they got a 3.8 instead of five, so that means they’re not as good of a teacher,’” Skaggs said.

The second thing the peer evaluators do, Skaggs said, is look at the grades of students in their courses.

“If you place all the emphasis on the average score or average GPA for a course then I think you’re missing a lot of important information. Like how difficult is that course? Is it a required course, is it known to be a challenging course to students?” Skaggs said. “I think that kind of stuff is missed in these basic teaching evaluations, and unfortunately I would say (at) UTD we’re still doing a lot of what we were doing 20 or 15 years ago.”

In its statement, the ASA suggests five examples of how SETs can be used more effectively. Changing the rating scale from five points to 10 points is one way to reduce bias, Skaggs said.

“A lot of it, too, is paying less attention to the personal characteristics or traits of the faculty members and more about the contents of the course,” Skaggs said. “Universities are being almost lazy in relying on these kinds of measures largely because they’re cheap and easy to administer, but they’re neglecting things like response rates and how the measures are actually used to assess the quality of one’s teaching. (They pay) less attention to putting in more qualitative types of open-ended questions, which means it would take more time to look at and assess the quality of the teaching.”

This means moving away from personality traits and toward getting students to discuss what was useful in the course or what teaching strategies could be implemented to improve it, she said.

“I encourage the university to think about if this is truly important, that faculty are effective teachers, then there should be more effort to improve the way that teaching is evaluated and not just relying on a cheap easy type of instrument,” Skaggs said.

