Monday, October 01, 2012

Instructional kaizen toolbox: Post-test surveys as metacognitive tool

Recently I gave an exam, the first of the semester in my quantitative methodology and research design course for geographers. The course follows a general flipped format, though modified by the fact that the lecture time occurs in a lecture hall with chairs bolted to the floor, facing forward. In theory 75 students are registered, but in practice around 50 show up. We are in the lecture hall three days a week for 50 minutes each time, plus there is a two-hour lab weekly. My TA is phenomenal and knows the material cold.

Let's examine this case in an attempt to flesh out one or more Instructional Kaizen techniques. 

================================================

I'm almost done marking the exams, but already some general patterns have emerged.

Learning outcomes
Some students appear to have done well. These are the names I recognize: students who answer questions every time, or almost every time, we meet. Some also send me emails or post on the course website's forum.

Some students appear not to have done so well. I recognize fewer of their names, unless they are repeating the course, in which case I recall them from last year.

This pattern is not shocking. In fact, it's painfully common. It leaves me asking what, if anything, I can do to alter this empirical regularity. And by empirical regularity, I mean it has happened twice using essentially the same material. It happened last year. It happened this year. If I teach this course next year and don't change anything, I predict it will happen again. Why? Let's examine the reasons.

Reason 1: The first exam is hard?
Does this pattern occur because the first exam is hard? I don't see it as a difficult exam. Granted, I teach the course, so you can judge for yourself.

Reason 2: I suck as an instructor (for this course)?
This is my sixth or seventh year teaching the course, and I feel certain that I have improved in my knowledge of the content, my ability to concoct entertaining and memorable examples, workshops and rules of thumb, and my sense of which concepts and applications give (studious) students difficulty. I have also built up a substantial array of student-tested assignments. So while I have looked long and hard at my teaching and coaching skills, I don't think they are the primary variable at work here. Granted, feelings aren't as conclusive as variables with high construct reliability, but given that we have no real data to measure this, you'll just have to go with my assumption.

Reason 3: Students were not exposed to enough worked-out examples?
The most difficult calculation involved finding the first, second, and third quartiles to construct a box-and-whisker diagram. True, I do focus on applications (i.e., using these techniques to answer a question), but the lecture scenarios also included these kinds of applications. In lab, students also worked through similar kinds of problems. Indeed, I even posted last year's exam, which was conceptually very similar to this year's. And in case students weren't sure what the correct kinds of answers would look like, I also posted the old answer key. There was much conceptual similarity, and so long as students could generalize beyond the cases they observed to new cases, there were no surprises on this exam.
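For concreteness, here is a minimal sketch of that quartile calculation in Python. The data values are made up, and numpy's default interpolation rule is only one of several textbook conventions, so it may differ slightly from the hand-calculation method we teach:

    import numpy as np

    # Hypothetical sample of values (not actual exam or course data).
    values = np.array([12, 15, 15, 18, 21, 22, 24, 27, 30, 31, 35])

    # Quartiles; numpy's default linear interpolation is one of several conventions.
    q1, q2, q3 = np.percentile(values, [25, 50, 75])
    iqr = q3 - q1

    # Five-number summary behind a box-and-whisker diagram.
    print("min:", values.min(), "Q1:", q1, "median:", q2, "Q3:", q3, "max:", values.max())

    # Whiskers are commonly drawn to the most extreme points within 1.5 * IQR of the box.
    lower_whisker = values[values >= q1 - 1.5 * iqr].min()
    upper_whisker = values[values <= q3 + 1.5 * iqr].max()
    print("whiskers:", lower_whisker, upper_whisker)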

Heck, even the TA relayed anecdotes from students saying they thought the exam was fair, which I interpret to mean there were no tricksy questions.

Reason 4: Failing students have poor meta-cognitive skills?
So maybe the reason for the dichotomy or bifurcation in marks is that some of my students have poorly developed meta-cognitive skills. This could mean they are unable to identify when and where they have holes in their knowledge, and think they have mastered the material. They are poster children for the Dunning-Kruger effect: they don't know what they don't know, and they overestimate what they do know.

If this is the case, then the obvious solution is to help them improve their meta-cognitive skills.

The meta-cognitive carrot
In an effort to increase students' meta-cognition, I assigned a post-exam survey worth 2 extra credit marks on this exam. All in all, this alters the balance of marks over the course of the semester by 0.2%, a fairly trifling amount. However, given that it is an easy-to-earn 0.2% increase, I'm hoping that students will complete the 12 multiple-choice or fill-in-the-blank questions quickly.

I am also hoping it gives me better insight into what students actually did to prepare for the exam and how they felt this would help them.

Post-exam quiz questions
These questions collect information on the following:

  • Did they know that the course website contains mini-lectures that deal with course content? How many of them did they view?
  • Do they attend the labs? Do they work through the practice labs?
  • Do they go to 'lecture'? Do they work through the lecture scenarios before they come to the lecture hall? If so, how many of those did they work through? All of them? Less than half?
  • When they attend the lecture hall, do they take at least a page of handwritten notes?
  • Did they know that a copy of an old exam was on the course website? If so, did they work through it?
  • How did they feel about their level of preparation before they entered the exam room? How did they feel about their level of preparation after they completed the exam?
  • What mark did they think they had scored once they completed the exam?

What do I hope to achieve?
For this iteration of the post-test survey, I simply want to collect the data and compare it to the students' actual performance. What patterns emerge? Are they consistent with what evidence-based learning research suggests should be the case, or, barring that, with my intuition? I'll post my findings in the next installment of Instructional Kaizen.
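As a rough illustration of the comparison I have in mind, here is a minimal sketch, assuming the survey responses and exam marks get merged into a single spreadsheet. The file name and column names (predicted_mark, actual_mark, worked_old_exam, and the pass mark) are hypothetical placeholders, not the actual survey export:

    import pandas as pd

    # Hypothetical merged file of survey responses and exam marks.
    df = pd.read_csv("post_exam_survey.csv")

    # Calibration: how far off were students' self-predicted marks?
    df["prediction_error"] = df["predicted_mark"] - df["actual_mark"]
    print(df["prediction_error"].describe())

    # A Dunning-Kruger-style pattern would show weaker students overestimating more,
    # i.e., a negative correlation between actual marks and prediction error.
    print(df[["actual_mark", "prediction_error"]].corr())

    # Simple cross-tab: did working through the old exam go with better performance?
    df["passed"] = df["actual_mark"] >= 50  # hypothetical pass mark
    print(pd.crosstab(df["worked_old_exam"], df["passed"], normalize="index"))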

So what are the Instructional Kaizen techniques discussed above?
Three techniques stand out. First, there is the general interest in improving pedagogy through continuous tinkering. Second, there is the identification of variables of possible interest. Third, there is a focused intervention, in this case data collection with the post-test survey, in hopes of identifying patterns that might be amenable to change. In later installments, we may decide that this step was really the precursor to an informed intervention.
