-Here's a handy little post with some graphs of SAT scores by income. I was actually more interested in the comments than in the post. I guess I know I've been spending too much time in the Ivory Tower when I'm surprised at how surprised some people are by the strong correlation between income and SAT scores. On the other hand, I'm possibly more surprised by how some people seemed to toss aside the differences as though they were nothing. If you look at a chart of SAT percentiles, you'll see that the difference in average SAT reading scores between the lowest and highest income brackets translates into the 27th versus the 70th percentile on the test. That seems like a huge difference to me -- especially considering that those are the average scores for students coming from families in those income brackets.
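To put that gap in perspective, here's a quick back-of-the-envelope sketch (my own arithmetic, not from the post) that converts those percentiles into standard deviation units, assuming roughly normally distributed scores and an SD of about 100 points per section:

```python
# Rough conversion of the 27th vs. 70th percentile gap into SD units.
# Assumes scores are approximately normal; the ~100-point SD is my assumption.
from scipy.stats import norm

low_bracket = norm.ppf(0.27)   # z-score of the lowest bracket's average
high_bracket = norm.ppf(0.70)  # z-score of the highest bracket's average

gap_sd = high_bracket - low_bracket
print(f"Gap: {gap_sd:.2f} SD (~{gap_sd * 100:.0f} points if the SD is ~100)")
# A bit over 1.1 SD -- on the order of 110+ points on a single section.
```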
-Since economics was one of my undergrad majors, I've always had something of an affinity for thinking like an economist. At the same time, the further I progress in my studies, the stupider some of the assumptions economists make seem to me. When I started reading this post about thinking like an economist, I thought it was going to be a cute little example of how it can be a good thing. And then I got to the end, where he says "When a friend asks me to help them move, I write them a check to pay professional movers instead. It’s just more efficient," and remembered why economists frustrate me sometimes. Helping somebody move isn't really about doing what's most efficient -- it's about helping out your friend(s). Aiming for efficiency can do a lot of good, but sometimes it's ok to just relax and enjoy life.
-The NY Times has a strongly-worded editorial today praising the Race to the Top funding, calling it "indefensible" for unions to block tying student achievement to performance ratings for teachers. Part of me agrees. I think it's inevitable that this is going to happen, and the unions should focus on implementing a good system rather than just fighting it. On the other hand, it's also indefensible to imply that tying student achievement to performance ratings is a panacea, for three main reasons:
1.) Only about 1/3 of teachers teach a subject that is on a state test
2.) Given measurement error, poorly formulated tests, and so on, value-added and gain-score measures are still highly unreliable. One recent study found a correlation of only .2 between teachers' scores from year to year (that's really low, for you non-mathematicians -- see the quick simulation after this list).
3.) Even if we can measure growth in student achievement accurately, we're not all that sure exactly what it means. So the kid got better at taking the 6th grade state math test . . . and?
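To give a sense of what a year-to-year correlation of .2 means in practice, here's a small simulation (my own toy illustration, not the study's data): if you rank teachers by a score with that much year-to-year correlation, most of last year's "top" teachers don't stay on top the next year.

```python
# Simulate what a 0.2 year-to-year correlation in teacher scores implies.
# Illustrative only -- the 0.2 figure is from the study; the standard-normal
# toy model and everything else here are assumptions for the sketch.
import numpy as np

rng = np.random.default_rng(0)
n_teachers = 100_000
r = 0.2  # assumed year-to-year correlation

year1 = rng.standard_normal(n_teachers)
year2 = r * year1 + np.sqrt(1 - r**2) * rng.standard_normal(n_teachers)

top1 = year1 > np.quantile(year1, 0.75)  # top quartile in year 1
top2 = year2 > np.quantile(year2, 0.75)  # top quartile in year 2

print(f"Year-1 top-quartile teachers still top-quartile in year 2: "
      f"{top2[top1].mean():.0%}")
# Comes out to roughly a third -- not much better than the 25% you'd
# expect from pure chance.
```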
-I'm wondering exactly what bar we need to set before we declare a policy a success. Martin West says the results of a study on the NYC principals academy "suggests [the program] is yielding positive results." FYI, the study found a gain of .06 SD in math, and no gain in English test scores, for schools whose principals graduated from the academy. To me, that seems utterly meaningless -- which means we should evaluate the principals academy on some other grounds, especially considering all the methodological problems involved in evaluating the program.
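For context, here's what a .06 SD gain looks like in percentile terms (my own quick conversion, assuming approximately normally distributed test scores):

```python
# Convert a 0.06 SD effect into percentile movement for an average student.
# Assumes test scores are roughly normally distributed.
from scipy.stats import norm

effect_sd = 0.06  # the reported math gain
new_percentile = norm.cdf(norm.ppf(0.50) + effect_sd) * 100

print(f"An average student moves from the 50th to about "
      f"the {new_percentile:.0f}th percentile")
# About the 52nd percentile -- hard to call that a meaningful improvement.
```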
1 comment:
There are a lot of interesting links and comments on the income-SAT relationship at:
http://www.marginalrevolution.com/marginalrevolution/2009/08/the-inheritance-of-education.html
And a great story on "omitted variables" that you wouldn't even think exist (beginning "Almost thirty years ago a study was published"):
http://www.proteinpower.com/drmike/statins/more-statin-madness/#more-2656