Re: RI School Performance
Nobody has to take my word on this, but I would love to learn that impressions of Rhode Island’s public education are unjustifiably poor. The ax that I grind is over the amount that we pay for the results that we get; mathematics proficiency of 50% or less is simply not acceptable in a state that pours so much into its education system. But evidence of improvement would be wonderful.
That is why I’m disappointed that I have to play to type and point out problems with the sunny picture painted by the Learning First Alliance/Rhode Island (LFA/RI) report (PDF) that Marc mentioned yesterday.
Plainly put, every bullet point highlighting improvements in proficiency (the measure on which all of the proffered assessments are based) is arguably invalid because of changes in the testing that began in 2004. From page 2 of the report:
Over the past 10 years, in response to changes in federal mandates, students in Rhode Island have participated annually in two different statewide assessments. The New Standards Reference Examination (NSRE) was administered yearly to students in grades 4, 8, and 10 during the 1998-2003 academic years. In 2004, the high school grade changed to grade 11. Eleventh graders continued to take the NSRE through spring 2007; the statewide assessment for elementary and middle school students, however, changed after 2004. Beginning in fall 2006, the New England Common Assessment Program (NECAP) was administered to all students in grades 3-8. High school students will transition to NECAP in the fall of 2007.
In other words, that roughly ten-point jump in the percentage of proficient high school students from 2003 to 2004 is likely attributable to the fact that students took the test a year later in their schooling, as eleventh graders rather than tenth graders. For the younger students, changes in the test itself appear to account for the large improvements that same year.
These complications carry over into the designation of Regents Commended Schools, because that jump could very well have created the impression of “exceptionally high” performance in 2004. They also carry over into the No Child Left Behind data, because those, too, are based on progress and targets. Note, for example, the high school chart on page 5: the number of schools in the “moderately or high performing” category jumped in 2004 and has been drifting down ever since.
The somewhat confusing NCLB measure of “targets” is made even less reliable (at least as the report uses it) by final results that appear to be inflated. Consider this footnote from page 4 (emphasis added):
The accountability rating was based on the aggregation of 3 years of testing on the statewide assessment in grade 10. The school as a whole and students in each of the eight student groups must meet the targets for the school to make Adequate Yearly Progress as defined by the federal No Child Left Behind Act of 2001. Schools without sufficient numbers of students (e.g., at least 45 students over the 3 years) in any one category were credited with meeting that particular target.
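To make the inflation mechanism concrete, here is a minimal sketch in Python, with invented numbers; nothing here comes from the report except the nine targets (the school as a whole plus eight student groups) and the 45-student crediting threshold described in the footnote above:

```python
# Hypothetical illustration (group sizes and results invented) of how
# NCLB-style target crediting can overstate performance. A school must
# meet a target for the whole school and for each of eight student
# groups, but any group with fewer than 45 tested students over three
# years is automatically credited with meeting its target.

MIN_STUDENTS = 45  # reporting threshold from the report's footnote

# (students tested over 3 years, actually met the proficiency target?)
groups = {
    "school as a whole": (600, True),
    "group A": (200, True),
    "group B": (120, False),
    "group C": (60, False),
    "group D": (30, False),  # too small: credited automatically
    "group E": (20, False),  # too small: credited automatically
    "group F": (12, False),  # too small: credited automatically
    "group G": (8, False),   # too small: credited automatically
    "group H": (5, False),   # too small: credited automatically
}

credited = sum(1 for size, met in groups.values()
               if met or size < MIN_STUDENTS)
demonstrated = sum(1 for _, met in groups.values() if met)

print(f"Targets credited:     {credited}/{len(groups)}")      # 7/9
print(f"Targets demonstrated: {demonstrated}/{len(groups)}")  # 2/9
```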
In short, a school could be counted as meeting nearly all of its targets while actually demonstrating proficiency on only a few of them. The bottom line is that some progress appears to have been made over the past decade, but the tests and targets have changed enough along the way that a fair assessment is harder to make than it ought to be. Making matters worse, many of the folks on whom citizens might want to rely for sober analysis seem more interested in “focusing on the positive.”