His argument is that linking gains in ELA to an individual teacher, especially a reading or English teacher, is unfair, because reading scores, to Hirsch, are a proxy measure of background knowledge, and building background knowledge is a school's job: it belongs to science and history teachers as much as to English teachers.
I will be honest: I am a data guy. I believe in the power of data and think that, for the most part, it tells us more than it deceives us. And I am bullish on the careful and strategic use of value-added data to 1) learn a lot about what works, 2) partially evaluate teachers, and 3) evaluate schools and school systems–generally, I think the point of accountability for scores should be the school rather than the individual teacher, with school leaders who see the whole picture having the direct say over who teaches for them. But really, I love that data and tend to take it at face value. Hirsch made me question some of my faith in the data on the ELA side. Reading scores are tricky… very tricky… from a measurement standpoint, and his argument linked to a lot of what the data on the data tells us.
- It is true that reading/ELA scores are much less reliable and stable than math scores.
- It is true that ELA scores are “sticky” and hard to move quickly, and so hard to link to a specific teacher.
- It is true that the average top-quintile teacher in math makes 1.5 times as big a gain, per year, as a top-quintile teacher in ELA.
- It is true that a larger proportion of what the ELA tests measure is introduced and reinforced outside the classroom (vocabulary and background knowledge, for example) than is the case with math.
- It is true that a given teacher’s scores correlate 50% less with that teacher’s scores in a subsequent year when she is a reading teacher than when she is a math teacher.
- It is true that background knowledge correlates strongly with reading comprehension levels.
Hirsch’s conclusion is that ELA results are a measure of school-wide effectiveness in instilling background knowledge and that value-added scores are unfair to reading teachers because they aren’t primarily responsible for the instilling (or neglecting) of background knowledge. Not only do all of a school’s teachers bear responsibility there, but so do administrators–and not just building-level administrators. Because the success or failure of a school in building a knowledge base is so often a curricular decision far above the pay grade of the lowly teacher–would that it were otherwise!–it’s as much the top-level decision makers who determine the fundamentals of results as the folks on the front line.
I’m not ready to throw out ELA scores yet. The scores are probably a combination of things, among which background knowledge is one significant component and things more within the direct control of English teachers are another. But his argument certainly erodes the direct line of responsibility between reading teacher and ELA score, and it reinforces for me that scores are best when they go to and are applied by a good manager (i.e., a principal) who 1) sees the whole picture of an individual teacher’s work, 2) is him- or herself deeply and directly responsible for results over the long term, and 3) makes a decision about what those scores mean and how to fix them. Is it a school-wide curricular issue? A training issue? An issue with a few teachers? All of those are possible explanations for low scores, and so my takeaway from Hirsch’s compelling argument is that the scores are still useful–useful not for enforcing a decision about an individual but for informing one… and that scores should inform a lot more than teacher evaluation.
Anyway, a shout-out to one of the most thought-provoking educators in the country for a thought-provoking piece about the potential misuse of a good thing (data).