Kaizen in the classroom...
is written by me, Jeff Boggs, an Associate Professor of Geography at Brock University in St. Catharines, Ontario, Canada. I have a keen interest in applying evidence-based research findings in higher education pedagogy to my own courses. I also have a strong interest in wider innovations or tragedies of possible importance to Ontario's English-language higher-education landscape. From time to time, I will post on these matters here, especially as they relate to my courses.
Monday, May 27, 2024
Too much marking?
For this coming semester, I resolve to provide students with fewer assignments. This is as much for my sanity as for their learning.
I suspect that as the number of assignments increases, the average student's interest in any one assignment decreases, all else being equal.
I am seriously considering abandoning the use of project-based learning, as well.
[I just found this incomplete unpublished entry from ca. April 2018. Posting it for my memory, as I still have these internal debates.]
Thursday, December 08, 2016
The tradeoff between providing useful feedback and timely feedback.
Maybe lessons I keep learning are lessons I haven't really learned...
As the semester winds down, I've been marking group proposals. Actually, I've just been providing comments on them, in the hope that students will improve upon them. The proper marking happens next week, when I will see whether students incorporated my feedback. Or not.
However, I also know that I can give too much feedback. And that feedback delivered too slowly means that students have less time to incorporate it or respond to it.
So why do I keep promising feedback that then takes too long to give?
Clearly I am not learning from my mistakes. This bodes poorly for my own metacognitive skills.
Sunday, November 27, 2016
Has it been four years? The marking blues, or, the cost of too much feedback.
Has it been four years?
Apparently I started this blog four years ago.
My most recent conundrum: the marking blues
My current conundrum concerns (alliterative, no?) my realization that I hate marking, even though it is important. I don't mind making rubrics (though sometimes I resort to a checklist). I think clear feedback is useful, at least when students read it. But marking grinds down my mental reserves like the trench warfare of WWI left a dent in France's population pyramid that took decades to work out. Ok, maybe marking isn't that bad.
I have a tendency to provide too much line-editing. I sometimes feel as if I provide more feedback than students will use (at least before I started requiring second drafts that showed how they incorporated or responded to my feedback). This was in addition to summarizing comments. And a detailed rubric (though not more than could fit on a single double-sided sheet).
One tool I've found to help speed my marking along is to use two highlighters, one green and one pink. The green highlighter underlines things students do well; the pink one underlines things students do poorly. I then might also add some margin notes. This seems to help me mark more quickly. One way it helps is that I now routinely have a big fat highlighter in my paw instead of a pen or pencil. Mechanically, this means it is well-nigh impossible to automatically scrawl comments in the margins, or line edit, without first replacing the highlighter with a more finely-nibbed writing instrument. And that little act, I surmise, is often too much effort.
The takeaway
If, when marking, you discover you are a compulsive line-editor and comment-leaver, using two colors of highlighter might reduce your marking time. This still leaves you time to also provide summarizing comments.
Monday, October 01, 2012
Instructional kaizen toolbox: Post-test surveys as metacognitive tool
Recently I gave an exam. The first of the semester in my quantitative methodology and research design course for geographers. It follows a general flipped course format, though modified by the fact that the lecture time occurs in a lecture hall with chairs bolted to the floor, facing forward. In theory 75 students are registered, but in practice around 50 show up. We are in the lecture hall three days a week for 50 minutes each time, plus there is a two-hour lab weekly. My TA is phenomenal, and knows the material cold.
Let's examine this case in an attempt to flesh out one or more Instructional Kaizen techniques.
I'm almost done marking the exams, but so far some general patterns have emerged.
Learning outcomes
Some students appear to have done well. Of the names I recognize, these students answer questions every time we meet or almost every time. Some also send me emails or post on the course website's forum.
Some students appear not to have done so well. I recognize fewer of their names. Unless they are repeating the course, in which case I then recall the names from last year.
This pattern is not shocking. In fact, it's painfully common. It leaves me asking what, if anything, I can do to alter this empirical regularity. And by empirical regularity, I mean it happened twice using essentially the same material. It happened last year. It happened this year. If I teach this course next year and I don't change anything, I predict it would happen again. Why? Let's examine the reasons.
Reason 1: The first exam is hard?
Does this pattern occur because this first exam is a hard exam? I don't see this as a difficult exam. Granted, I teach the course, so you can judge for yourself.
Reason 2: I suck as an instructor (for this course)?
This is my sixth or seventh year teaching the course, and I feel certain that I have improved my own knowledge of the content, my ability to concoct entertaining and memorable examples, workshops and rules of thumb, and my general sense of which concepts and applications give (studious) students difficulty. I have also built up a substantial array of student-tested assignments. So while I have looked long and hard at my teaching and coaching skills, I don't think that is the primary variable at work here. Granted, feelings aren't as conclusive as variables with high construct reliability, but given that we have no real data to measure this, you'll just have to go with my assumption.
Reason 3: Students were not exposed to enough worked-out examples?
The most difficult calculation involved finding the first, second and third quartiles to construct a box-and-whisker diagram. True, I do focus on applications (i.e., using these techniques to answer a question), but then the lecture scenarios also included these kinds of applications. In lab, they also worked through similar kinds of problems. Indeed, I even posted last year's exam, which was conceptually very similar to this year's exam. And in case students weren't sure as to what the correct kinds of answers would be for that exam, I also posted the old answer key. There was much conceptual similarity, and so long as students could generalize beyond the cases they observed to new cases, there were no surprises on this exam.
Heck, even the TA had anecdotes from students saying they thought the exam was fair, which I interpret to mean that there were no tricksy questions.
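For readers unfamiliar with the calculation, here is a minimal sketch of what finding the quartiles involves. This is not code my students saw (the exam was worked by hand), the data values are invented, and there are several competing quartile conventions; the sketch uses the median-of-halves approach.
```python
# A minimal sketch of the hardest calculation on the exam: finding the
# quartiles of a small data set in order to describe a box-and-whisker
# diagram. The readings below are invented for illustration, and the
# median-of-halves convention is used; other conventions differ slightly.

def quartiles(values):
    """Return (Q1, median, Q3) using the median-of-halves convention."""
    data = sorted(values)

    def median(xs):
        n = len(xs)
        mid = n // 2
        return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

    n = len(data)
    lower = data[: n // 2]        # values below the overall median
    upper = data[(n + 1) // 2 :]  # values above the overall median
    return median(lower), median(data), median(upper)

# Invented example: precipitation readings (mm) at 11 weather stations.
readings = [12, 18, 7, 25, 14, 30, 9, 21, 16, 11, 19]

q1, q2, q3 = quartiles(readings)
iqr = q3 - q1  # interquartile range: the width of the "box"

print(f"Q1={q1}, median={q2}, Q3={q3}, IQR={iqr}")
print(f"Box spans {q1} to {q3}; whiskers reach {min(readings)} and {max(readings)}")
```
On the invented readings this gives Q1 = 11, median = 16 and Q3 = 21, so the box spans 11 to 21 with whiskers out to 7 and 30. Not rocket science, but it does require keeping several steps straight.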
Reason 4: Failing students have poor meta-cognitive skills?
So maybe the reason for the dichotomy or bifurcation in marks is that some of my students have poorly-developed meta-cognitive skills. This could mean they are unable to identify when and where they have holes in their knowledge, and so think they have mastered the material. They are poster children for the Dunning-Kruger effect: they don't know what they don't know, and overestimate what they do know.
If this is the case, then the obvious solution is to help them improve their meta-cognitive skills.
The meta-cognitive carrot
In an effort to increase students' meta-cognition, I assigned a post-exam survey worth 2 extra credit marks on this exam. All in all, this would alter the balance of marks over the course of the semester by 0.2%, a fairly trifling amount. However, given that it is an easy-to-earn 0.2% increase, I'm hoping that students will complete the 12 multiple-choice or fill-in-the-blank questions quickly.
I am hoping that it also provides me better insight into what students actually did to prepare for the exam and how they felt this would help them.
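For anyone checking the arithmetic behind that 0.2% figure, here is the back-of-the-envelope version. The exam weight and raw exam total below are assumptions chosen for illustration, not the actual course breakdown; any pair with the same ratio gives the same bump.
```python
# Back-of-the-envelope check on the extra-credit arithmetic. The exam weight
# and raw exam total are illustrative assumptions, not the real breakdown.
exam_weight_pct = 10.0   # assumed: the exam counts for 10% of the final grade
exam_raw_total = 100.0   # assumed: the exam is marked out of 100
extra_raw_marks = 2.0    # the post-exam survey's extra credit

boost = extra_raw_marks / exam_raw_total * exam_weight_pct
print(f"Extra credit adds {boost:.1f}% to the final course grade")  # 0.2%
```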
Post-exam quiz questions
These questions collect information on the following:
- Did they know that the course website contains mini-lectures that deal with course content? How many of them did they view?
- Do they attend the labs? Do they work through the practice labs?
- Do they go to 'lecture'? Do they work through the lecture scenarios before they come to the lecture hall? If so, how many of those did they work through? All of them? Less than half?
- When they attend the lecture hall, do they take at least a page of handwritten notes?
- Did they know that a copy of an old exam was on the course website? If so, did they work through it?
- How did they feel about their level of preparation before they entered the exam room? How did they feel about their level of preparation after they completed the exam?
- What mark do they think they had scored after they had completed the exam?
For this iteration of the post-test survey, I simply want to collect the data and compare it to the students' actual performance. What patterns emerge? Are they consistent with what evidence-based learning research (or, barring that, my intuition) suggests should be the case? I'll post my findings in the next installment of Instructional Kaizen.
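Once the data are in, the comparison itself should be straightforward. The sketch below shows the sort of thing I have in mind, assuming the survey responses and exam marks can be exported to two CSV files; the file names and column names (student_id, worked_old_exam, predicted_mark, exam_mark) are hypothetical placeholders, not the real course exports.
```python
# A minimal sketch of the planned comparison between post-exam survey answers
# and actual exam performance. File names, column names and the CSV layout
# are hypothetical placeholders.
import pandas as pd

survey = pd.read_csv("post_exam_survey.csv")  # one row per student
marks = pd.read_csv("exam1_marks.csv")        # student_id, exam_mark

merged = survey.merge(marks, on="student_id", how="inner")

# Pattern 1: do students who worked through the posted old exam score higher?
print(merged.groupby("worked_old_exam")["exam_mark"].agg(["mean", "count"]))

# Pattern 2: how well calibrated were students' own mark predictions?
merged["calibration_gap"] = merged["predicted_mark"] - merged["exam_mark"]
print(merged["calibration_gap"].describe())
```
A consistently positive gap between predicted and actual marks among the weaker students would be in line with the Dunning-Kruger story floated under Reason 4.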
So what are the Instructional Kaizen techniques discussed above?
Three techniques stand out. First, there is the general interest in improving pedagogy by continuous tinkering. Second, there is the identification of variables of possible interest. Third, there is a focused intervention --- in this case, data collection with the post-test survey --- in hopes of identifying patterns that might be amenable to intervention. Though in later installments, we might decide that this is the precursor to an informed intervention.
Sunday, September 16, 2012
Inaugural posting in which I introduce the genesis of the term Instructional Kaizen
What is kaizen?
Economic geographers from the mid-1980s to the early 1990s, along with many other social scientists interested in really-existing capitalism, examined the workings of the then-ascendant Japanese economy. At the time, post-war Japan's economic might flowed from its substantial manufacturing base. One of the key institutional features of Japanese manufacturing systems was an internalized routine of continuous product and process upgrading, often called kaizen. Because this feature was alleged to be the norm in Japanese manufacturing culture, perhaps it is clearer to say it is a culturally-specific, commonsensical understanding of how work flows on the shopfloor. Kaizen suggests a constant tinkering with, monitoring, and revising of a production process or workflow in an effort to always make it that much better.
Why not Tüftler?
In the German-language industrial restructuring and economic geography literature, there was also reference to the Tüftler, which calls up images of the inventor toiling away in the garage or shop, trying to perfect the proverbial widget, or the process for making it. However, Tüftler never achieved widespread use in the English-language literature (I only learned the term while conducting fieldwork in 2000-2002 on the locational dynamics of the German book trade, a decade after I'd already been exposed to the ideas of kaizen, just-in-time production, flexible production, flexible accumulation and post-Fordism). Furthermore, my understanding of the usage of Tüftler doesn't suggest a widespread, collectively held mindset or disposition about how work is done, which is how kaizen is usually described. Instead, Tüftler refers to individuals who operate on their own. While these Tüftler are not unknown, their mindset was not described as ubiquitously held in German manufacturing in the way that the kaizen principle was alleged to be among Japanese workers.
The term Kaizen captures this principle in a single word.
Instructional Kaizen
I see my own teaching practice informed by this principle of kaizen, which leads me to coin the term 'instructional kaizen' both as shorthand and to recall the literature where I first encountered this principle of constant monitoring and improvement. While the evidence-based pedagogical literature doesn't use the term Instructional Kaizen, its findings (and the rigorous process through which the findings are derived) are consistent with a kaizen mindset.
Why Instructional Kaizen and not TüftlerInnenfest?
Instructional Kaizen suggests that there is a community of other instructors who embrace this mindset and possess this disposition, and share their successes, failures and practices to create a commonwealth of instructional knowledge. The image of Der Tüftler or Die Tüftlerin always conjured up a proprietary and guarded (if not downright secretive) lone wolf whose connection to a larger community of work came not through collaboration and sharing, but through reverse-engineering. Finally, I am no longer sure where I would stand with the Rechtschreibungsreformen if I made the lovely compound noun, BildungstüftlerInnenfest, which would translate as 'the festival of instructional tinkerers (of both sexes).'
More breadcrumbs
My YouTube account is ProfBoggs, and my long-standing webpage is www.jeffboggs.com. I work at Brock University and am affiliated with its Department of Geography.