Part 1: Background Information
According to Eduwonk, results from the Nashville incentive pay experiment are due to be released soon. I've been meaning for a while now to write up some background on the experiment so that we have some context when the results come out, and this seems like as good a time as any.
The National Center on Performance Incentives was started in 2006 with a five-year, $10 million grant from the Department of Education's Institute of Education Sciences. The center is housed at Vanderbilt University's Peabody College and run in conjunction with various partners, including the RAND Corporation and the University of Missouri. Peabody's Matthew Springer and James Guthrie (now of the George W. Bush Institute for Public Policy) are the directors, and the center is staffed by people from a range of institutions across the country (full list). The funding was to cover two experiments plus other related costs. The first experiment was conducted in Nashville from 2006 to 2009 and was dubbed the Project on INcentives in Teaching (POINT).
The center started at Vanderbilt at the same time that I did, and I worked there during my first year (2006-07) to earn my keep. I haven't been involved with the center since then and have no information on what the results are.
The original experimental design called for 200 middle school math teachers in the Metropolitan Nashville Public Schools -- 100 in the control group and 100 in the treatment group. Teachers in the treatment group were eligible for bonuses of up to $15,000 for each of three consecutive school years. Each teacher received $750 every year for participating, as long as they completed all the required surveys, interviews, etc. Teachers were recruited into the experiment in the fall of 2006, not long after the school year had begun.
Bonuses were based on student gain scores* (not quite the same as value-added; see the technical note at the end) on the Tennessee state test (TCAP). Unlike virtually every other state's, Tennessee's assessment system is vertically scaled, meaning that scores can be compared across years on the same scale (a score of, say, 250 in 7th grade means the same thing as a score of 250 in 6th grade). This means that a student who goes from 240 to 260 from 6th to 7th grade gained 20 points. Meanwhile, researchers looked at the years preceding the experiment to determine the average growth of students at each score level. Taking the previous example, let's assume that the average Tennessee 6th grader scoring a 240 on the state test scores a 255 the next year. A student who scored 260 would then be 5 points above average, so the teacher would receive a score of +5 for that student; each student the teacher taught would be scored similarly, and the teacher's score would be the average across his or her students. The purpose of calculating scores this way was to strike a balance between statistical rigor and transparency/ease of communication. The result is a calculation that's not quite as rigorous as a value-added score, but a lot easier for teachers to understand.
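To make the arithmetic concrete, here's a minimal sketch in Python. It's my own construction, not the center's actual code: I'm assuming the expected gains are looked up by a student's exact prior score, which is a simplification of however the researchers actually binned the historical data.

```python
def teacher_score(students, expected_gain):
    """Average each student's gain relative to the historical average
    gain for students starting at the same score level."""
    diffs = [(current - prior) - expected_gain[prior]
             for prior, current in students]
    return sum(diffs) / len(diffs)

# The worked example from the text: a student goes from 240 to 260 (a
# gain of 20) while the average student starting at 240 gains 15, so
# this student contributes +5 to the teacher's score.
print(teacher_score([(240, 260)], {240: 15}))  # 5.0
```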
Once a teacher's final score has been calculated, it's compared to the historical distribution for middle school math teachers in Nashville. A score at the 80th percentile earns a $5,000 bonus, the 85th percentile earns $10,000, and the 95th percentile earns $15,000. The bonus targets stayed the same for the entire three years, so it was possible for every teacher in the treatment group to earn a bonus each year (in other words, they weren't competing against each other). It's my understanding that for the first year the bonuses were distributed along with paychecks the following fall, but I don't know what the procedures were for the following two years.
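In code, that bonus schedule amounts to a simple threshold lookup. Again, this is my own illustration, assuming a teacher's score has already been converted to a percentile of the fixed historical distribution:

```python
def bonus(percentile):
    """Map a teacher's percentile (vs. the fixed historical
    distribution) to a bonus in dollars. Because the thresholds never
    move, any number of teachers can qualify in the same year."""
    if percentile >= 95:
        return 15_000
    if percentile >= 85:
        return 10_000
    if percentile >= 80:
        return 5_000
    return 0

print(bonus(96), bonus(88), bonus(81), bonus(60))  # 15000 10000 5000 0
```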
The experiment ended in May 2009, and a large team of researchers has been poring over data from test scores, interviews, surveys, and other sources ever since. This means that there is going to be a lot of analysis released at some point -- and that it's going to take a while for even the most informed reader to sort through it.
*technical note: A "gain score" is simply the gain in a student's score from year to year (260 - 240 = a gain of 20 points), while a "value-added score" is an attempt to isolate a teacher's effect on a student's score and might control not only for a student's previous achievement level but also for the other teachers he/she has or has had, the school he/she attends, demographic factors, class size, peer effects, and any number of other things. In other words, a gain score is just the raw growth a student exhibits, while a value-added score is a more precise estimate of exactly how a specific teacher influenced that growth (though value-added scores can be computed for schools, states, etc. as well).
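To see the distinction in code, here's a toy contrast -- entirely made-up numbers, and only a single control variable (prior score), where a real value-added model would include many more:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated district-wide data: current score depends on prior score
# plus noise. All numbers here are invented for illustration.
prior_all = rng.normal(245, 10, 500)
current_all = 30 + 0.95 * prior_all + rng.normal(0, 5, 500)

# Fit the district-wide relationship between prior and current scores.
X = np.column_stack([np.ones_like(prior_all), prior_all])
beta, *_ = np.linalg.lstsq(X, current_all, rcond=None)

# One hypothetical teacher's class.
prior = np.array([240.0, 250.0, 235.0])
current = np.array([262.0, 270.0, 255.0])

gain_score = (current - prior).mean()        # raw growth
predicted = beta[0] + beta[1] * prior        # what the model expects
value_added = (current - predicted).mean()   # growth beyond expectation

print(gain_score, value_added)
```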