After 23 years, it’s certainly not the first time that I’m rethinking assessment. In fact, it’s more like 32 years, since I was a TA in grad school and earned money as a grader in both high school and college.
Grading is massively time-consuming and increasingly depressing. I get such joy out of creating both materials and active spaces for my classes, and then I have to grade work that demonstrates cognitive inability, lack of studying, sloppiness, or misunderstanding of what I requested. I comment, I give feedback, and I see little improvement. But the real problem is that I have 200 students; I just can’t get as individualized as I would like.
The meta-problem is that the assessments aren’t teaching the students. I am a teacher; everything I do should teach the students. When I’m teaching, I’m happy. And I don’t mean only “direct instruction”: any time I’m involved in their learning, things are good. Assessment seems like judgment. Even when I give them study guides containing all the multiple-choice questions, pre-tests, practice exercises, and weekly skill reviews, I still have to write too many bad scores. I don’t want to judge their work as inferior or superior. I realize I’m judging just a segment of what they produce in this world, and that I’m forbidden from stepping over that boundary.
I know what you’re thinking: use rubrics! I do use rubrics, and I even have my online students do self-assessments of how well their work fits the rubric. They do learn something from those (usually that they haven’t been keeping up), but that happens only twice per semester. The rest of it is seven quizzes, each partly multiple choice and partly essay. The essays take so much time, and the problems with them are so consistent, that I could use a drop-down menu to make the comments. Writing a decent historical essay is the main skill we practice all semester. But to let them know whether they’re doing it right, I have to grade them. One at a time. And I see my work reflected in their grades, which, regardless of pedagogy, create a pretty standard bell curve all on their own.
I’m a college instructor. I make my own tests. When I was in university and in graduate school, each class had a midterm, a research paper, and a final exam. That was it. That didn’t work for community college students, so we have quizzes every two weeks. Please understand I came to this method after years of trying other ways. And I know that not all students are there to obtain grades, good or bad. They may just be passing through my class on their way to other things.
So I’m working on reconceptualizing, again. I have studied constructivist and connectivist methods, and have come to the conclusion that I can’t do many of those wonderful things with 200 students per semester, at least not as the guiding force of my class. I also do believe in core content and core skills for my discipline; I’ve tried to jettison that old-style thinking but can’t, because I really believe that knowledge of historical facts and methods has intrinsic value. I don’t think most of the “make history fun” exercises (e.g., using Twitter to recreate the Crimean War) are for me; they have little to do with historical skills.
So I’m talking to people online, and looking at things like the NY Times article about studying, and discussion of John Hattie’s work. It was a relief to note that Hattie shows that student achievement is determined mostly by rapid feedback, students’ prior cognitive ability, and instructional quality. This last is defined through the abilities of an “expert teacher”:
1. can identify essential representations of their subject,
2. can guide learning through classroom interactions,
3. can monitor learning and provide feedback,
4. can attend to affective attributes, and
5. can influence student outcomes.
The first three I get, no problem. #4 I’ve been trying to convince colleagues of for years, and it’s the one I feel least equipped to do (aside from extreme emoticon use). The last seems circular: you are a good teacher if you can get the students to learn.
The CMS doesn’t matter, but computers are certainly good at immediate feedback for multiple-choice questions. On my quizzes, students have to wait until I grade the essay, and by then it might have been a week since they’ve seen the quiz. If Matheson is right, by that point they might care only about the score. Students don’t often read comments on tests in class (they come up and show me a test on which I’ve written copious comments, asking, “why did I get a C?”). They don’t read them in my online classes either, partly because they don’t know how to see the comments, even though I explain how in every announcement that their tests have been graded, in a video announcement, and in the FAQ. (I just got a message from a student today telling me she’d finally figured out how to see her graded quiz, and that she’s sorry she made the same mistakes on this essay as on the last one, now that she can see my comments.)
So I could break up the quizzes into weekly multiple-choice quizzes with immediate feedback. The essay could become an Assignment in Moodle, so that I could use a drop-down scale to tell them how the essay performed, if they’re interested.
But I wonder if there should be more. If the assessment is for teaching, then, like discussion, it could be the course (next post). In the meantime, providing immediate feedback more often, and automating essay feedback, might help. But I have much more work to do on this issue.