Tennessee has figured out a solution (hat tip: Stephen Lentz) to the fact that only about 1/3 of teachers teach tested subjects but that all teachers are supposed to have 35% of their evaluation based on value-added scores . . . all "non-TVAAS" teachers (those in subjects not covered by the state's value-added system) will simply have their school's average score used for their evaluation. Problem solved!
Aaron Pallas continues his critique of the LA value-added kerfuffle, arguing that the LA Times did not do enough to inform its readers about the statistical uncertainty in value-added measurements. He argues that they should've used confidence intervals (something that popped into my head the other day) to more accurately describe the estimate of a teacher's effect on student test scores (they send you a confidence interval with your SAT scores, so why not with a value-added score?), in addition to better describing year-to-year and subject-to-subject variability. This is a follow-up to his incisive critique of the Times' failure to follow normal standards of journalism when verifying the student data.
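To make the point concrete, here's a minimal sketch of what reporting a confidence interval alongside a value-added estimate would look like. This is not from Pallas's piece or the Times' analysis; the estimate and standard error are hypothetical numbers chosen for illustration.

```python
# Illustrative only: a 95% confidence interval around a hypothetical
# value-added estimate. The numbers below are made up.
estimate = 0.15    # hypothetical teacher effect, in student-level SD units
std_error = 0.10   # hypothetical standard error of that estimate

z = 1.96           # normal critical value for a 95% interval
lower, upper = estimate - z * std_error, estimate + z * std_error

print(f"Estimated effect: {estimate:.2f} SD "
      f"(95% CI: {lower:.2f} to {upper:.2f})")
# With a standard error this large, the interval spans zero, so the data
# can't rule out that this teacher is average -- which is the point about
# reporting uncertainty rather than a single number.
```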
Jay Mathews has Killian Betlach's take on what it's like to be told to restructure a school.
Roger Garfield, a teacher in DC, provides an insider's view of some of the problems the schools face. The first couple of paragraphs brought back a lot of memories for me.
Newark's answer to the Harlem Children's Zone is the Global Village, a group of five schools that have received federal turnaround dollars.
Robert Samuelson says the real key to reform is student motivation. It's a pretty short op-ed, and there's a lot more to it, but I think he raises a valid point: if student motivation doesn't change, why would we expect student learning to change? But I don't think it's quite as strong a repudiation of other policies as he argues, since better principals, better curricula, better teachers, smaller classes, and so on could conceivably alter student motivation (though if they don't, they probably won't work).
2 comments:
I don't understand Tennessee's solution -- or rather its lack of an equitable one. Each and every teacher of every subject (yes, even art, music, and PE) does assessments at the beginning, middle, and end of the year. Why are these results not used to determine how the students progressed from the start of the year to the end? Seems to me this would make common sense and be a real solution: that way every teacher is evaluated on what they are hired to teach.
Anonymous,
Teachers assess throughout the year, but they are usually assessing different things at different times. A physics teacher might assess forces and one-dimensional motion in the first term and light and electricity in the fourth.
It would be possible to do a common assessment three times a year, but it would be hard to get students to take it seriously if it isn't counted toward their grade. Yet it would be terribly unfair to have someone's first-term physics grade reflect what the student didn't know before he or she took the course!
If the first test doesn't count, there is also a terrible incentive to game the system. I find that my students want me to do well. If I told them, "You will get this same test at the end of the year. The more you have improved, the more I will get paid," many of them would deliberately do poorly so I would get more money.