It's that time of semester again for American university students. Finals week is upon us. For some it has already passed, like a whirlwind of caffeine, all-nighters, alcohol, tears, and paper. Others are in the midst of it (I am in the midst of it, and writing this piece by way of staving off the agony of writing my next paper). Either way, when Finals week ends, summer begins - but not before you've been asked to evaluate your professors' performance.
Student evaluations. I'm not generally a fan of them as a concept. They're fine in theory, as long as you keep in mind that only in theory do students know what makes good teaching. Let's take a moment to consider them.
The idea behind student evaluations is well-intentioned. Instructors should have some form of evaluation, and since students interact with their instructors more than anyone it simply makes sense to administrators that students should evaluate their instructors.
Students are also seen as having insight into the efficacy of teaching. After all, once you've reached college you're looking at some thirteen years of being in a classroom and experiencing teaching.
Evaluation is also a way to learn how students perceive their instructors. Perhaps an instructor grossly violated ethics in the classroom; the anonymous evaluation enables a student to report them safely, since instructors do not receive the evaluations directly, but only after they have been delivered to the department for review.
Sounds good, doesn't it?
Think about the best way to gauge students' perception of their instructors. You're probably thinking: let them write what they think. That would be best, but it isn't always something the administration permits. Rather than free-form evaluations, where students can write what they feel, many universities use a scaled form of evaluation, giving statements and asking the student to rate them from Strongly Agree to Strongly Disagree, or on a 1-5 scale, or some other such method. Some evaluations are free-form, or are hybrids that include space for students to write, but the written remarks are generally not quantified by administration. Anything on a scale is, however, which creates impressive percentages and charts and graphs for administrators to coo over.
There's also the fact that student feedback often doesn't tell us anything useful. Evaluation responses frequently offer only a facile level of feedback. Writing things like "The subject was boring," or "The lecturer's presentation skills were lacking," or "Give better feedback" is a fundamentally meaningless exercise. Boringness is subjective, saying that presentation skills were lacking does not address how in any way, and what is meant by "better feedback" will forever remain mysterious. Vague and unhelpful feedback like this is depressingly normal.
Not only can it simply be useless, but it can be utterly clueless as well. Comments on course content are often similarly unhelpful. If a student writes that there's too much feminism in a course on gender studies, said student has really let the whole point fly over their head.
Bias, obscurity, and sexism come out in evaluations too:
Perhaps one of the most damaging aspects of student feedback is inviting students to comment on their teachers. Of course it's nice occasionally to read that one is a "brilliant teacher" but this hardly gives much insight. Comments are often confusing, obscure or downright sexist. For example:
"Get a new lecturer."
"One of the most biased classes ever. The teacher was certainly well-informed, but had little to no consideration of other people's opinions."
As a feminist, I am often accused of bias: there is no equivalent for male colleagues who adopt Marxist, critical or cultural perspectives.
Students' ideas about what academics ought to be like often favour older men with beards. Most universities will have at least one older man whom students will venerate as a "legend".
And at my university, students are given a bubble to fill in with their expected grade. Last semester, every student who filled that bubble chose A or B, which might set up expectations, especially if the students can't or won't be bothered to look at the grades they got on assignments before deciding what they expect in the class.
The idea behind student evaluations holds up if you assume that students know how to tell good teaching from bad (and don't conflate entertaining with good), care enough about the material to care about the delivery in the first place, and don't have unrealistic expectations. But none of these assumptions necessarily holds.
Just because a student has been a student does not mean they know good teaching from bad. Being subject to teaching does not make one an expert in teaching, nor in the material taught, just as having been a patient for a tonsillectomy doesn't make me an expert in being a doctor or in medical procedure. Especially since I was unconscious and unable to pay attention during my tonsillectomy, just as many students are unconscious of what happens in the classroom because they avoid paying attention. (I remember explaining, on the board with examples, what an annotated bibliography is, and then being asked after class and by email "What's an annotated bibliography?" by one third of the class, all of whom were present for the explanation.)
Nor can we guarantee that all students care. In courses like the one I teach, a Gen. Ed. requirement, most don't care; they just want to get the requirement out of the way. Even in courses in a student's own program you can't guarantee interest. I wasn't the most interested in my Enlightenment literature course as an undergrad (except the part where we read Paradise Lost); I'd have much rather been in a medieval literature course. I probably wasn't the most attentive as a consequence.
And even if those issues aren't on the table, it seems that students and instructors often talk past each other. Students see an A as good, a B as okay, and a C as bad. As an instructor I see an A as exceptional, blow-me-away, a B as good, and a C as following the instructions and doing okay. What we mean as instructors and what students understand are not necessarily the same thing, and a student may resent their grades. We've conditioned students to believe they need to go to college to make something of themselves, and we've also made the A the goal for students rather than what they should perceive it as: a measure.
Perhaps if we relied less on student evaluations* and opted for more peer evaluations (or, for graduate T.A.s like me, classroom evaluations by faculty), we might make things better. I know that I'm learning how to teach, but it's a lot of trial and error, and my students may or may not recognize which is which. I try to turn an error entertaining once I cotton on to it being an error, so at least something comes through.
I mean, if a class is bad and it's the fault of the instructor, definitely reflect that in the evaluation. Don't get me wrong: I don't want to suggest we leave only evaluations that are all sunshine and puppies. I left my first scathing evaluation this year precisely because the course was not what it was advertised to be; the instructor clearly hadn't the slightest clue about any of the material, even in broad strokes (except the parts that had nothing to do with the stated purpose of the class); and the instructor entered the classroom three times while we were completing the evaluations to try and hurry us up. That last is a major ethical violation right there, and I recorded it in my evaluation. Sorry, but I'm not sorry I left a scathing review of the course. It was horrid, and I wouldn't wish that instruction on anyone in the future.
Sometimes you just have to know the line between constructive criticism and outright meanness, and try to stick more to the former than the latter.
*Evaluations in a graduate class are probably fairly useful. We have some experience in the classroom, and we're generally less shy than undergraduates.
Image credit: Book Ghostwriting Blog
Image credit: When In Academia