Most people who went through a preservice teaching program, generally called Teachers’ College, regarded the experience somewhat skeptically. Most of us wanted something different – less theory, more practice, more experience in the classroom, different mentor teachers, and the list goes on.
There haven’t been many systematic reviews of the research on preservice education programs, and many of those that do exist aren’t very strong. Kate Walsh, writing for the Abell Foundation, offers a scathing review of the effectiveness of teacher education programs. Her conclusion: teacher preservice education programs are not valuable as indicators of teacher effectiveness, and verbal ability is more useful in determining who will be the best teachers.
At the heart of this policy is a claim by the education establishment that taking the coursework needed to obtain certification is not only the best, but also the only acceptable means for preparing teachers. This assertion, some claim, is supported by a body of research consisting of 100 to 200 studies. This report reveals in detail the shortcomings found in this research. In fact, the academic research attempting to link teacher certification with student achievement is astonishingly deficient.
To reach this conclusion, we reviewed every published study or paper—along with many unpublished dissertations—cited by prominent national advocates of teacher certification. We found roughly 150 studies, going back 50 years, which explored or purported to explore the relationship between teacher preparation and student achievement.
To our knowledge, there has been no comparable effort by analysts to drill systematically down through these layers of evidence in order to determine what value lies at the core.
The following deficiencies characterize the work advocating teacher certification:
– Research that is seen as helping the case for certification is cited selectively, while research that does not is overlooked.
– The lack of evidence for certification is concealed by the practice of padding analyses with multiple references that appear to provide support but, once read, do not.
– Research is cited that is too old to be reliable or retrievable.
– Research that has not been subjected to peer review is given unmerited weight, with particular reliance on unpublished dissertations.
– Instead of using standardized measures of student achievement, advocates design their own assessment measures to prove certification’s value.
– Basic principles of sound statistical analysis, which are taken for granted in other academic disciplines, are violated routinely. Examples include failing to control for such key variables as poverty and prior student achievement; using sample sizes which are too small to allow generalization or reliable statistical inference; and relying on inappropriately aggregated data.
There is a long but very interesting rejoinder exchange surrounding the objections of Linda Darling-Hammond, whose own work Walsh critiques thoroughly. It’s a fascinating look into the world of academic squabbles.