Rethinking assessment

After 23 years, this is certainly not the first time I’ve rethought assessment. In fact, it’s more like 32 years, since I was a TA in grad school and earned money as a grader in both high school and college.

Grading is massively time consuming and increasingly depressing. I get such joy out of creating both materials and active spaces for my classes, and then have to grade work that demonstrates cognitive inability, lack of studying, sloppiness, or misunderstanding of what I requested. I comment, I give feedback, but see little improvement. The real problem, though, is that I have 200 students. I just can’t get as individualized as I would like.

The meta-problem is that the assessments aren’t teaching the students. I am a teacher. Everything I do should teach the students. When I’m teaching, I’m happy. And I don’t mean “direct instruction” — any time I’m involved in their learning, things are good. Assessment seems like judgment. Even when I give them study guides containing all the multiple-choice questions, plus pre-tests, practice exercises, and weekly skill reviews, I still have to write too many bad scores. I don’t want to judge their work as inferior or superior. I realize I’m judging just a segment of what they produce in this world, and that I’m forbidden from stepping over that boundary.

I know what you’re thinking. Use rubrics! I do use rubrics, and even have my online students do self-assessments of how well their work is fitting the rubric. They do learn something from those (usually that they haven’t been keeping up) but that’s only twice per semester. The rest of it is seven quizzes, partly multiple choice and partly essay. The essays take so much time and the problems with them are so consistent that I could use a drop-down menu to make comments. Writing a decent historical essay is the main skill we practice all semester. But to let them know if they’re doing it right, I have to grade them. One at a time. And see my work reflected in their grades, which, regardless of pedagogy, create a pretty standard bell curve all on their own.

I’m a college instructor. I make my own tests. When I was in university and in graduate school, each class had a midterm, a research paper, and a final exam. That was it. That didn’t work for community college students, so we have quizzes every two weeks. Please understand I came to this method after years of trying other ways. And I know that not all students are there to obtain grades, good or bad. They may just be passing through my class on their way to other things.

So I’m working on reconceptualizing, again. I have studied constructivist and connectivist methods, and have come to the conclusion that I can’t do many of those wonderful things with 200 students per semester, at least not as the guiding force of my class. I also do believe in core content and core skills for my discipline — I’ve tried to jettison that old-style thinking but can’t, because I really believe that knowledge of historical facts and methods has intrinsic value. I don’t think most of the “make history fun” exercises (e.g. using Twitter to recreate the Crimean War) are for me — they have little to do with historical skills.

So I’m talking to people online, and looking at things like the NY Times article about studying, and discussion of John Hattie’s work. It was a relief to note that Hattie shows that student achievement is determined mostly by rapid feedback, students’ prior cognitive ability, and instructional quality. This last is defined through the abilities of an “expert teacher”:
1. can identify essential representations of their subject,
2. can guide learning through classroom interactions,
3. can monitor learning and provide feedback,
4. can attend to affective attributes, and
5. can influence student outcomes

The first three I get no problem. #4 I’ve been trying to convince colleagues of for years, and it’s the one I feel least equipped to do (aside from extreme emoticon use). The last seems circular — you are a good teacher if you can get the students to learn.

So I’ve been focusing on the feedback idea. Then on Tuesday, as part of the #edchat Twitter discussion, Julian Ridden (@moodleman) retweeted a point from Colin Matheson about computers and immediate feedback.

The CMS doesn’t matter, but computers certainly are good at immediate feedback on multiple-choice questions. On my quizzes, students have to wait until I grade the essay, and by then it might have been a week since they’ve seen the quiz. If Matheson is right, by that point they might care only about the score. Students don’t often read comments on tests in class (they come up and show me a test on which I’ve written copious comments, asking “why did I get a C?”). They don’t read them in my online classes either, partly because they don’t know how to find the comments, even though I explain how in every announcement that their tests have been graded, in a video announcement, and in the FAQ. (Just today a student told me she had finally figured out how to see her graded quiz, and that she was sorry she made the same mistakes on this essay as on the last one, since she can now see my comments.)

So I could break up the quizzes, with weekly multiple-choice quizzes that have immediate feedback. The essay could be an Assignment in Moodle, so that I could use a drop-down scale to tell them how the essay performed, if they’re interested.
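One way to bake that immediate feedback into weekly multiple-choice quizzes is Moodle’s GIFT import format, which lets each answer carry its own comment that the student sees on submission. A minimal sketch — the question content below is illustrative, not drawn from this course:

```text
// GIFT format: "=" marks the correct answer, "~" a distractor,
// and "#" attaches per-answer feedback shown immediately.
::Causes of WWI::Which event was the immediate trigger of the First World War? {
    =The assassination of Archduke Franz Ferdinand #Correct — the July Crisis followed directly from the assassination.
    ~The signing of the Treaty of Versailles #The treaty ended the war in 1919; it did not start it.
    ~The German invasion of Poland #That event opened the Second World War, in 1939.
}
```

Imported into a quiz set to the “Immediate feedback” question behaviour, each choice displays its comment as soon as the student answers, with no grading delay.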

But I wonder if there should be more. If the assessment is for teaching, then, like discussion, it could be the course (next post). In the meantime, providing immediate feedback more often, and automating essay feedback, might help. But I have much more work to do on this issue.

7 thoughts on “Rethinking assessment”

  1. Do you use Standards Based Grading? I relate to your struggle to get students to read your feedback rather than focus on their score. I find that SBG, when communicated well, helps students see their grade as a product of what they are learning. They can begin to see that by focusing on better understanding and demonstrating specific topics within the class, they can improve their grade. I think SBG has the potential to direct student attention toward what they are learning instead of how many points they are earning. I’d be interested to hear your thoughts on this.


    1. Jenny, this looks very useful to my quest, particularly the idea that assignments engender more learning than exams. I did not know about it so thank you!


  2. I continue to refine my own list of principles of effective instructional practice. Here’s as close as anything to what I have currently.
    * Start with clear learning outcomes
    * Devise assessments based on the desired outcomes
    * Understand where students are starting and build from there
    * Provide time for applied practice in multiple contexts (especially authentic ones)
    * Give timely/actionable feedback
    * Foster peer learning & collaboration
    * Provide time and space for metacognition/reflection

    A couple of thoughts –

    1 – I have played with phrasing this list not in terms of instructor behaviors but in terms of the characteristics of the learning environment, of which the instructor is a (co-?)designer/creator and in which the instructor is the/a facilitator. It’s worth doing just to play with the roles and responsibilities a bit.

    2- I think the last point on my list – metacognition – is really important to the assessment discussion. As long as assessment remains, to the students, an evaluative exercise – i.e. one focused on summative assessment – “did I pass?” – then it is difficult to help students acquire the habits of mind that we really want – ones that welcome challenge, that see failures as learning opportunities, that see the value of process at least as much as product, etc. By building in reflective opportunities, I think both instructor and student might be more likely to think of assessment less in terms of right/wrong pass/fail, and more in terms of lessons learned.


  3. Jim, I’m thinking I need reasons for some of these. I have learning outcomes, but they are skill-based, and I’m starting to think we’re missing the larger point of fostering the intellect. Your “devise assessments based on the desired outcomes” is good phrasing, but I wonder whose outcomes? Mine, the college’s, the student’s? And fostering collaboration needs a purpose – I admit I’m getting wary of collaboration for the sake of collaboration. Wow, it sounds like I’m questioning everything, huh? 🙂


  4. In my 10+ years teaching online, I have tried myriad ways to enhance the assessment process. Some forms of technology seem to work better than others. I use a visual narrative program called Pixetell to augment my textual responses to student submissions. It takes about as long to create as does a “regular response” and students are extremely thankful. Jing and Camtasia and other screen capture programs are out there and I have used them, but you then have to upload to a server, create a background brand so you are not “selling Screencast” and the video and audio quality is not that great…

    In contrast, Pixetell is very easy to use, and when you render the little movie it automatically uploads to the server. I use the Pro version, which is about $13 or so a month for unlimited use, and I think it is priceless. My recent annual review went off the charts (and I did not change much of anything other than using Pixetell), and if student responses are any indication of success, then it is surely working. Besides all of that, it is FUN to use. As for an assessment device, it is apparent that visual learners, in particular, benefit tremendously from having strengths and weaknesses pointed out to them, and hearing my voice also helps fill the f2f void. Here is an example of what I use to introduce new students to my online psych course. When you make the movie initially, you can call it something like “Visual Narrative for Class” and the students just click on it, or you can embed the code right into Announcements or Discussion threads if your LMS allows for it, e.g., BB 9.1.


