Why Grad School is Kind of Like Lego

One thing that drives me nuts about higher education is that it provides no assurance that people will learn to think. It is remarkably easy to acquire vocabulary and ideas, and then unwittingly abuse them because you don’t actually understand the key ideas you are trying to work with.

So this got me thinking that grad school is kind of like building with Lego. I always sucked at Lego. I was playing with my friend’s kid a few weeks ago, looked at the bazillion different blocks on the floor around me, and thought, “I got nuthin.”

“You could follow the instructions,” my young friend suggested helpfully.

And then I thought of The Lego Movie. The neat thing about the movie is that its premise rests on the well-understood fact that the real fun and creativity of Lego lies in going off script and making your own stuff. Unless, like me, you really suck, and then you just make things that aren’t really things.


Girls, School Dress-Codes and Slut-Shaming

In America, we see Islamic women all covered up and think, “That poor woman, made to be ashamed of her body!” But is it any less oppressive to convince a woman that her uncovered body is never beautiful enough? Is covering enslavement… or freedom? I want to find out.

Tagline from Lauren Jayne’s blog “The Modesty Experiment”

I was just a kid the first time someone wolf-whistled at me. And I’m not gonna lie: it was great. The thing is, I’d been picked on in school my whole childhood, and enough kids had called me “ugly” over time that by junior high, I had come to believe it was true. So at the tender age of thirteen, already with more to fill out a bra than anything like self-esteem, I discovered in the instance of one catcall that I had sexual power. By fifteen, I was one of those girls people thought was older than she was, so my teen years were characterized by fairly regular sexual attention, usually from men rather than boys my own age.

It was flattering at the time. It made me feel special. But when I think back on this period of my life, I feel sad for the young woman who believed she didn’t matter without male attention. Maybe that’s why I get my back up when debates arise about how young women dress, and whether or not they deserve to be subject to the “male gaze” when they show bra straps, or wear short shorts, or what have you.

The topic came up on this recent episode of The Current, on which junior high student Tallie Doyle and her mum were interviewed about Tallie’s efforts to protest her school’s dress code. The story came on the heels of a similar case involving a Quebec student. It is remarkable how quickly debates become heated when it comes to the sexualization of girls and young women.

Alberta Education’s Task Force on Teacher Excellence Report: There’s Some Serious Cherry-Picking Going On Here

It is hard to anticipate the outcomes of the brewing political war in Alberta over the Education ministry’s release of the “Task Force for Teaching Excellence” report. Nor can I begin to address the array and complexity of the issues it raises within one short blog post. For the moment, I want to use a small segment of the report to highlight the partisan nature of the “teacher excellence” debate. The segment in question is one a reader might absorb without much question, which is why I wish to draw attention to it: namely, the report’s citation of two US studies correlating student achievement with teacher effectiveness.[1] What is so disturbing about this aspect of the report (and it is not at all the only disturbing aspect) is that the inclusion of a couple of sexy graphs gives the whole thing a roundly undeserved air of scientific rigour.

Few would argue that there is a link between teacher effectiveness and student learning, and few would disagree that this link is of central importance. Measuring the link is another matter entirely – a matter so complex as to warrant reams – and I mean reams – of academic research focusing on the challenges of such measurement.

“Value Added” Measures of Teachers: A Taste of Methodology Concerns

“Methodology” describes researchers’ efforts to come up with the best ways to do their work. In addition to doing research, researchers are always debating the accuracy and reliability of the methods used to collect data, represent it, and draw conclusions from it. Such debates occur in both quantitative (statistical) and qualitative research. Even a cursory search of research databases turns up methodological concerns about measuring the impact of teacher effectiveness on student learning.[2] Here, I’ll implore you to read just until your eyes glaze over (it won’t take long), then stay with me by jumping down to the point of this very boring sample of material I’ve pulled from studies concerning the measurement of a teacher’s “value added”:

  • Value-added models have been widely used to assess the contributions of individual teachers and schools to students’ academic growth based on longitudinal student achievement outcomes. There is concern, however, that ignoring the presence of missing values, which are common in longitudinal studies, can bias teachers’ value-added scores.
  • Value-added approaches to teacher evaluation have many problems. Chief among them is the commonly found class-to-class and year-to-year unreliability in the scores obtained.
  • In this article, the authors provide a methodological critique of the current standard of value-added modeling forwarded in educational policy contexts as a means of measuring teacher effectiveness. An alternative statistical methodology, propensity score matching, allows estimation of how well a teacher performs relative to teachers assigned comparable classes of students.
  • Despite questions about validity and reliability, the use of value-added estimation methods has moved beyond academic research into state accountability systems for teachers, schools, and teacher preparation programs (TPPs). Prior studies of value-added measurement for TPPs test the validity of researcher-designed models and find that measuring differences across programs is difficult.
  • Empirically, we reject nearly all assumptions underlying value-added models.

I want to make it clear that I did not spend hours picking out snippets to support my position. I am not a statistician, and I cannot even comment on the validity of the studies just sampled. My point in including the above segments from research papers is to illustrate just how complex this measurement problem is. Which should lead one – anyone, statistically inclined or not – to challenge the authority of a report that cites exactly two quantitative studies on the matter and declares “problem solved.”

It’s Not “About the Kids”

In media interviews accompanying the release of this report, Task Force Chairperson Glen Feltham declared that “the interest of the student was paramount – the child came first.” Alberta Education Minister Jeff Johnson echoed, “If we truly want to do what’s best for kids and students, we’ve got to have the guts to have these conversations.” The thing is, saying you want what’s “best for kids” is like saying you like kittens, puppies and ice-cream. Who doesn’t? The Alberta Teachers’ Association backs its position with the same language.

Really, then, who isn’t in it “for the kids”? The Task Force for Teacher Excellence report is “about the kids,” sort of. But it’s much more about a high-stakes ideological battle for the hearts and minds of Alberta’s electorate. And here is where it’s important to note that the two studies cited in the report are not contextualized politically any more than they are situated in the academic research. The cited studies come out of the United States: a country so rife with partisanship as to warrant skepticism when it comes to almost any public policy research it produces. It is about the last place we should be looking to for education research, and it is certainly the last place on which we should be modelling public policy debates.

There is little doubt that there are a few (very few) Alberta teachers who ought to be put out to pasture. But this report isn’t any more about teacher excellence than it is “about the kids.” It’s about a political battle represented chiefly by two organizations – Alberta’s Ministry of Education and the Alberta Teachers’ Association – and reflecting two very different perspectives on the extent to which education ought to remain public, or move toward the PC government’s preferred vision of increasing privatization. And it is endlessly frustrating to see research “cherry-picked” on ideological grounds rather than assessed on its own merits. I call Data Abuse. When an entire policy platform is built around the premise that there is a solid causal link between teacher “excellence” (however that’s defined, but that’s a whole other problem) and students’ learning, we ought to have some confidence that this link is sound. I’m not seeing it.


[1] Chetty, R., Friedman, J., & Rockoff, J. (2011). The long-term impacts of teachers: Teacher value-added and student outcomes in adulthood. Working paper 17699, National Bureau of Economic Research. Retrieved from http://www.nber.org/papers/w17699.pdf. The lengthy report concludes with (sensible) caution about the application of its findings to policies impacting teacher pay, assessment, and retention.

Sanders, W. & Rivers, J. (1996). Cumulative and residual effects of teachers on future student academic achievement. University of Tennessee Value-Added Research and Assessment Center. Retrieved from http://www.cgp.upenn.edu/pdf/Sanders_Rivers-TVASS_teacher%20effects.pdf

[2] Here I jumped into the University of Alberta’s subscription to ProQuest, which aggregates peer-reviewed research from many different academic journals and disciplines. The search terms “value added” and “teachers” yielded 562 hits, and I skimmed the abstracts (the article descriptions) for the first two dozen of these hits.

Trying to Get Beyond Lynden Dorval Hero Worship in the “No Zeroes” Debate

Skip my class and don’t hand stuff in, will ya?

Looks like Lynden Dorval won’t be back in the classroom next week. Dorval, a veteran Physics teacher at Edmonton’s Ross Shep High School, was suspended last school year for defying his school’s “no zeroes” grading policy. The case has received remarkable media attention, and has highlighted yawning gaps between education “experts” and the “common sense” of the public.[1]

When I completed my BEd degree in 2001, my program included a required assessment course, which I found interesting and useful. I was very surprised to learn at the time that assessment and evaluation – critical, core skills for teachers – had become required curriculum in my teacher education program only a few years before I took it. Without consistent professional learning in this area, it may well be a case of teachers not knowing what they don’t know. And indeed, it turns out that there’s a lot wrong with traditional grading practices. The development of teachers’ knowledge and sophistication with respect to assessment and evaluation practices over the past couple of decades has been remarkable.[2]

Media Accounts of “No Zeroes” Are Terrible, Okay?

It is not difficult to make the case that assessment and evaluation ought to be of central interest to education’s policy-makers. To this end, the growth of interest in and research related to “no zeroes” is a good thing. Or at least it might be, if it were actually being brought into public engagement with any degree of intelligence.