One Study (Too Many) in Online Interactivity

Thanks to Jim Julius, I became aware of a Forbes blog post about a new study, Interaction in Online Courses: More is NOT Always Better, by Christian and John Grandzol of Bloomsburg University of Pennsylvania.

They sampled 359 “lower-level” online business courses, which is a suitably large number for such a study (usually it's n=7 or something stupid in studies like this). From their abstract:

Our key findings indicate that increased levels of interaction, as measured by time spent, actually decrease course completion rates. This result is counter to prevailing curriculum design theory and suggests increased interaction may actually diminish desired program reputation and growth.

I am not going to defend “interactivity” as an indicator of either engagement or success in online classes, and workload-wise it would be silly of me to balk when I could be doing less work interacting with students. But a couple of things about this study were scary: one was the method, the other the explanation of their surprising conclusions.

Their method was to use the course management system to get statistics on how much students were interacting with various parts of the class. This is, as they note, “an approach that has not been attempted.” Well, there's a reason for that. Time recorded by a CMS is not at all indicative of eyeball time spent on a page, much less of absorption or participation. They focused primarily on the discussion boards as indicators of “interactivity,” noting a negative correlation between time spent in such activities and course completion.

I am not a statistician, but I know that you can open any web page and just walk away. You can also spend an hour searching through a forum that features avatars (like Moodle’s) to look at the prettiest girls in the class. You can open a course tab and then open another tab to watch surfer videos on YouTube. You can forget to log out. In all these cases, the timer keeps running, yet no interaction (or even attention) is taking place.
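To make the problem concrete, here is a minimal sketch of how “time spent” is often inferred from CMS click logs: by subtracting consecutive timestamps, which makes an idle open tab indistinguishable from an hour of attentive reading. The log format below is invented for illustration; the study doesn't describe theirs.

```python
from datetime import datetime

# Hypothetical CMS click log: (student, page, timestamp) per page view.
# Real CMS logs vary; this format is made up for the example.
log = [
    ("student_a", "forum", datetime(2009, 3, 2, 9, 0)),
    ("student_a", "quiz",  datetime(2009, 3, 2, 9, 5)),   # 5 minutes "on the forum"
    ("student_b", "forum", datetime(2009, 3, 2, 9, 0)),
    ("student_b", "quiz",  datetime(2009, 3, 2, 13, 0)),  # 4 hours "on the forum"?
]

def naive_time_spent(events):
    """Sum the gaps between consecutive clicks, the way CMS time
    reports typically do. An idle tab, a forgotten logout, or a
    YouTube detour is indistinguishable from attentive reading."""
    by_student = {}
    for student, page, ts in events:
        by_student.setdefault(student, []).append((ts, page))
    totals = {}
    for student, views in by_student.items():
        views.sort()
        for (t1, page), (t2, _) in zip(views, views[1:]):
            minutes = (t2 - t1).total_seconds() / 60
            totals[(student, page)] = totals.get((student, page), 0) + minutes
    return totals

print(naive_time_spent(log))
# student_b's "240 minutes in the forum" may just be an open browser tab.
```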

See the following screenshot of the time it took my students to complete a single test (the system shows the test as open unless they close it):

Did one student actually spend four days writing a test? More valuable indicators might include the number of replies to posts, or even faculty assessment of who the most active students really were!
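If one must mine the CMS at all, event counts are at least harder to inflate by walking away. Here's a minimal sketch of counting replies per student, using a forum export format I've invented for illustration:

```python
from collections import Counter

# Invented forum export: (post_id, author, parent_id); parent_id None = new thread.
posts = [
    (1, "student_a", None),
    (2, "student_b", 1),
    (3, "student_b", 1),
    (4, "student_c", 2),
]

# Count replies (posts with a parent) per author: a crude proxy for
# participation, but one you cannot inflate by leaving a tab open.
replies = Counter(author for _, author, parent in posts if parent is not None)
print(replies.most_common())  # [('student_b', 2), ('student_c', 1)]
```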

Then we get to the implications. One of them manages to insult community colleges, as if we were providing a lower form of education:

Second, Rungtusanatham and colleagues (2004) proposed that higher level courses (e.g. MBA level) require more interaction levels; introductory courses need little interaction. Our sample consisted of community college courses. Do they require higher levels of interaction when the content may not need interpretation or further analysis?

Perhaps they are slamming business classes in particular, but my history classes require rather intense levels of interpretation and analysis; just ask my students.

Then they got into class size issues:

Similarly, we did not find a significant correlation between enrollment size and online course completion rates. This finding indicates that calls for enrollment caps may be more arbitrary than fact-based.

Later in the paragraph you discover that they removed “very large” sections as “outliers” and that “a majority of classes in this study had between 14 and 30 students”. Mine have 40. They say “perhaps” significant results would be found with larger classes. I have a colleague who says it doesn't matter how many students she starts with; she ends with 27. I tend to find that close to true in my own case, so I'd say that small classes may simply not show a correlation. Interacting with 13 other classmates is rather different from trying it with 39.
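There's a standard statistical name for this worry: range restriction. Dropping the “very large” sections and keeping mostly classes of 14 to 30 removes exactly the variation in enrollment that a correlation with completion would need. A minimal sketch with simulated data (my numbers, not the study's):

```python
import random
random.seed(1)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Simulated sections (not the study's data): completion drifts down
# as enrollment grows, plus noise.
sizes = [random.randint(10, 120) for _ in range(300)]
completion = [0.9 - 0.004 * s + random.gauss(0, 0.05) for s in sizes]

full_r = pearson(sizes, completion)
kept = [(s, c) for s, c in zip(sizes, completion) if 14 <= s <= 30]
small_r = pearson([s for s, _ in kept], [c for _, c in kept])

print(f"all sections:     r = {full_r:.2f}")   # clearly negative
print(f"only 14-30 kept:  r = {small_r:.2f}")  # attenuated by range restriction
```

With simulated data like these, the full sample shows a strong negative correlation while the trimmed sample is noticeably weaker, which is exactly why “no significant correlation” in a 14-to-30 sample tells us little about classes of 40.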

The closing section admits that they couldn't actually confirm activity, and that their findings might not be “generalizable to other course levels or disciplines”. This makes me wonder whether it's business students who don't need as much interaction, even if all the methodological problems are set aside. How much do business students from the Midwest have in common with history students in California? History is a required subject; business isn't. Could people require significantly more interaction in a required class?

The authors are indeed aware of the limitations of their study, but I remain concerned that this sort of method may be used to justify mass canned courses, taught by people unfamiliar with the subject, or classes where individual students get little feedback or input from instructors. Course completion, the authors acknowledge, may not be a good way to assess success; there are many reasons why students stay in or drop a class. I was never convinced there was even a connection between interaction (as weakly defined as it was) and course completion. Presenting such a tenuous link as a negative correlation seems dangerous for online pedagogy.

Perhaps with such studies also, more is not always better.

One thought on “One Study (Too Many) in Online Interactivity”

  1. Thanks for reflecting publicly. You’re right on target about the way these kinds of findings could be used. The lack of rigor in research, and the market-driven hype then used to make educational policy and practice decisions, is rampant, and it’s not new.

    Regarding policy: it’s a matter of keeping one’s head in the sand or not. Neoliberal ideologies dictate policy-making, which speaks volumes about how such discourses gain their currency. The Ravitch/Meier blog speaks back to that. http://blogs.edweek.org/edweek/Bridging-Differences/

    Clark’s instructional research reviews are informative regarding practice.
    http://www.cogtech.usc.edu/recent_publications.php

