On Knowing, Not Just Saying

Readers of this space know of my obsession with philosophical (really, epistemological) issues. Mostly, I am concerned with separating the wheat from the chaff in ideas – sorting the truly silly from the potentially true, the probably true, and the certain. In education, these categories tend to get tossed around together, without much reference to the important question: How Would We Know?

I recently had the pleasure of seeing John Hattie speak at the University of Toronto. To say it was refreshing is an understatement. I’ve been drawn to his work for a while now, and even before I found it, I was convinced such a project was possible.

(Here’s Hattie giving a similar talk.)

His interest is in measuring the effects of various inputs in the education system – inputs and interventions like team teaching, outdoor education, whole language versus phonics, and practically every other question in pedagogy. His premise: we can determine the effect size of all of the things we do in schools – how well they work. His conclusion: as a profession, we should move towards those things that work well (have a large effect size), and away from those that do not.
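For readers who, like me, want to see the arithmetic before the term sticks: an effect size is just a standardised difference between groups. Here is a minimal sketch in Python of the most common version, Cohen’s d, with invented classroom scores – an illustration of the concept only, not Hattie’s data or his meta-analytic method (his figures come from aggregating effect sizes across many published studies).

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardised mean difference between two groups (Cohen's d),
    using a pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical end-of-year scores: a class taught with some intervention
# versus a comparison class taught without it.
intervention_scores = [72, 78, 81, 69, 84, 77, 90, 74]
comparison_scores = [68, 71, 75, 64, 79, 70, 82, 69]

print(f"effect size d = {cohens_d(intervention_scores, comparison_scores):.2f}")
```

A d of 0.2 is conventionally read as small and 0.8 as large; Hattie’s well-known benchmark is that interventions above roughly 0.4 deserve our attention.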

Simple enough. I’m not able to provide a census of his detractors, but I know lots of people who critique his philosophical assumptions. They say that teaching practice cannot be reduced to such certainties. They argue that striving to capture quantitatively the subtle human interactions and nuances of teaching is absurd. They sometimes argue that to gather such data is to take aim at low-performing schools and the groups within them; some even call the project culturally imperialist.

Obviously, they have warm hearts. All of us want students to do well, and most of us root for the disadvantaged. But until we gather reliable data on what works and what does not, we continue to impoverish our students. And what better way to ensure fairness in our society than by providing all students with the best possible teaching techniques and practices? And how else to do that than by measuring, as precisely as possible, the effect size of what we do in schools?

Like Hattie, I am hostile to the idea that we are not professionals with something special to give. I reject outright the notion that “all teachers have their own way.” If an old-timer said, “Well, I hit the students; that’s how I get them to learn,” we would be outraged. I don’t see how it’s much different to say we are all equally successful using whatever techniques and approaches we “feel” are right. (Also, if teaching really were a matter of whim, we would let absolutely anyone walk in and teach our classes; we do not, which is evidence that we think it matters who teaches and how it is done.)

But perhaps the most satisfying element of the whole thing is the humility its adoption would bring to our field. Teaching suffers from a strange paradox of ego: on the one hand, most teachers feel like imposters and denigrate the value of their practice; on the other, many teachers act the role of Superteacher, where everything he or she does is magical. What teacher hasn’t bristled in a staff meeting where a colleague bellows, “Well, in my class, students love doing X,” or “I’ve never had that problem in my class…”, or “My students learn best when…”? I always want to ask, “How would we know?” and get a response more substantial than “Because I’ve been teaching for 19 years, and I just know.”

Measurement projects like Hattie’s sweep all that nonsense away by asking, “What is the effect of our labours?” We can know, within some margin of error, what works and what doesn’t. At least, if we can ever know at all, it will be through an approach like Hattie’s, not our gut feelings and egotistical rantings. And there is comfort in that kind of regime: it depersonalizes teaching somewhat, diminishing the notion that teaching is a cult of personality or a kind of mystical alchemy. Some approaches work better than others; let’s determine which, reliably, use them more often than not, and continue to measure the effects of our work – forever.

I dream of a rigorous measurement approach in a school setting, a unit of organization too small to hide in. You would probably need to set up long-term measurement indicators – perhaps a few basic assessments used for years within each grade or course, evaluated with clear rubrics and exemplars, with copies of old student work kept for years – and determine whether students are improving by virtue of our efforts, and of course, by how much. (It isn’t enough merely to improve: as Hattie points out, we need to know the magnitude of the improvement. A student in any class will achieve some improvement over the year just through maturation.) Other indicators I like: success after high school and student feedback.
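To make the maturation point concrete, here is a small sketch of the kind of arithmetic a school could run on its own long-term assessment. Everything in it is hypothetical – the rubric scores, the September/June design, and especially the maturation baseline, which in practice would have to come from the school’s own historical records rather than a made-up constant.

```python
from statistics import mean

# Hypothetical rubric scores (out of 10) on the same grade-9 writing task,
# given in September and again in June, for each student.
september = [4.0, 5.5, 3.5, 6.0, 4.5, 5.0]
june = [6.0, 7.0, 5.0, 7.5, 6.5, 6.0]

# Assumed "maturation" gain: how much students tend to improve over a year
# regardless of what we do, estimated from prior cohorts.
expected_maturation_gain = 0.8

raw_gain = mean(j - s for j, s in zip(june, september))
gain_beyond_maturation = raw_gain - expected_maturation_gain

print(f"raw gain: {raw_gain:.2f} rubric points")
print(f"gain beyond maturation baseline: {gain_beyond_maturation:.2f}")
```

The point is only that the question “did our efforts make a difference, and how much?” reduces to arithmetic once the same assessment, rubric, and exemplars are used year after year.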

We could finally, without reference to our own whims, begin to identify genuine “best practices” in our schools. Does team teaching work at our school? Let’s check this year’s assessment and see. Are students benefitting from the Advanced Placement regime? Let’s see how our graduates have done over the past 10 years and compare their results against those of our graduates from the pre-AP days. Is our program rigorous enough? Let’s gather data from 1st- and 2nd-year students in post-secondary studies. It is for these reasons that I’m, in principle, a fan of large-scale assessments like the EQAO.
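The AP-era versus pre-AP comparison is just a two-cohort comparison, and it should come with a margin of error rather than a bare number. A rough sketch, again with invented figures (the GPAs and cohort sizes are placeholders, not any school’s data), using a simple bootstrap to put an interval around the difference:

```python
import random
from statistics import mean

# Hypothetical first-year university GPAs for graduates from the AP era
# and from the pre-AP era; real data would come from alumni follow-up.
ap_era_gpa = [3.1, 2.8, 3.4, 3.0, 2.9, 3.3, 3.5, 2.7, 3.2, 3.0]
pre_ap_gpa = [2.9, 2.7, 3.1, 2.8, 3.0, 2.6, 3.2, 2.8, 2.9, 2.7]

observed_diff = mean(ap_era_gpa) - mean(pre_ap_gpa)

# Bootstrap a rough 95% interval for the difference in means.
random.seed(0)
diffs = []
for _ in range(10_000):
    a = [random.choice(ap_era_gpa) for _ in ap_era_gpa]
    b = [random.choice(pre_ap_gpa) for _ in pre_ap_gpa]
    diffs.append(mean(a) - mean(b))
diffs.sort()
low, high = diffs[250], diffs[9750]

print(f"difference in mean GPA: {observed_diff:.2f} (95% CI {low:.2f} to {high:.2f})")
```

If the interval straddles zero, the honest answer to “did AP help?” is “we can’t tell yet” – which is still a far better answer than the loudest voice in the staff room.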

A proper measurement regime would provide some justification for the claims we make about our schools, our classes, our practice. In fact, it’s the only thing that ever has, or ever will. Without it, the loudest voice in the room simply wins.