My word won't be taken on this, but I would love to learn that impressions of Rhode Island's public education are unjustifiably poor. The ax that I grind is with the amount that we pay for the results that we get, and mathematics proficiency of 50% or less is simply not acceptable in a state that pours so much into its education system. But evidence of improvements would be wonderful.
That is why I'm disappointed that I have to play to type and point out problems with the sunny picture painted by the Learning First Alliance/Rhode Island (LFA/RI) report (PDF) that Marc mentioned yesterday.
Plainly put, all of the bullet points highlighting improvements in proficiency (on which all of the proffered assessments are based) are arguably invalid because of changes in the testing beginning in 2004. From page 2 of the report:
Over the past 10 years, in respect to changes in federal mandates, students in Rhode Island have participated annually in two different statewide assessments. The New Standards Reference Examination (NSRE) was administered yearly to students in grades 4, 8, and 10 during the 1998-2003 academic years. In 2004, the high school grade changed to grade 11. Eleventh graders continued to take the NSRE through spring 2007; the statewide assessment for elementary and middle school students, however, changed after 2004. Beginning in fall 2006, the New England Common Assessment Program (NECAP) was administered to all students in grades 3-8. High school students will transition to NECAP in the fall of 2007.
In other words, that roughly ten point jump in the percentage of proficient high school students from 2003 to 2004 is likely attributable to the fact that the students took the test a year later. For the lower-aged students, changes in the test itself appear to account for the large improvements that same year.
These complications carry over into the measurement of Regents Commended Schools, because that jump could very well have given the impression of "exceptionally high" performance in 2004. They also carry into the No Child Left Behind data because those, too, are based on progress and targets. Note, for example, the high school chart on page 5: The number of schools in the "moderately or high performing" category jumped in 2004 and has been drifting down ever since.
The somewhat confusing NCLB measure of "targets" is made less reliable (at least as it's being used in the report) because the final results appear to be inflated. Consider this footnote from page 4 (emphasis added):
The accountability rating was based on the aggregation of 3 years of testing on the statewide assessment in grade 10. The school as a whole and students in each of the eight student groups must meet the targets for the school to make Adequate Yearly Progress as defined by the federal No Child Left Behind Act of 2001. Schools without sufficient numbers of students (e.g., at least 45 students over the 3 years) in any one category were credited with meeting that particular target.
The bottom line is that some progress appears to have been made over the past decade, but there have been sufficient changes that it isn't as easy to make a fair assessment as it could be. Making matters worse is that many of the folks on whom citizens might want to rely for sober analysis seem more interested in "focusing on the positive."
"The number of schools in the 'moderately or high performing' category jumped in 2004 and has been drifting down ever since."
Uh oh.
"Schools without sufficient numbers of students (e.g., at least 45 students over the 3 years) in any one category were credited with meeting that particular target."
What? That's not acceptable. At most, it should have been unscored.
Posted by: Monique at February 15, 2008 6:27 AM

I think few would say all the news is good, but the report shows that not all the news is bad, either.
Regarding the proficiency scores, the 2004-05 changes may have given a bump, but the pre-change (1999-2004) trend is positive for all grades in all subjects, except possibly middle school math.
But yes, there is lots of work to be done. You are right that the flattened scores for high schools are a special source of concern.
What? That's not acceptable. At most, it should have been unscored. -Monique
If they scored schools on the percentage of targets reached, this would matter, but they don't. The only thing that matters in determining performance or improvement is how many targets a school misses, not how many or what percentage it makes. Miss one target, and you're "non-performing".
From the point of view of labeling a school as "performing" or "making progress", then, there is no difference between being given credit for a target (say, % of Asian-American students proficient) and not having that target at all.
One effect is that urban schools, where the designated groups (minorities, poverty, kids with IEPs, etc.) are bigger, have more targets to hit and a greater chance of failure. If your school has fewer than 45 of a group, their scores get blended in with the rest. Even if that group performs poorly, the rest of the kids' scores can get them to the target.
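The rule described above can be sketched in a few lines of code. This is only an illustration of the logic as I understand it, not RIDE's actual procedure; the group names, sizes, and the `ayp_label` function are all hypothetical:

```python
# Sketch (not RIDE's actual code) of the AYP labeling rule as described:
# subgroups below the minimum-n threshold are simply not evaluated, and a
# single missed target among the evaluated ones flips the school's label.
# All names and numbers below are illustrative assumptions.

MIN_N = 45  # minimum subgroup size (over 3 years) for a target to exist

def ayp_label(subgroups):
    """subgroups: list of (name, n_students, target_met) tuples."""
    evaluated = [g for g in subgroups if g[1] >= MIN_N]
    missed = [name for name, n, met in evaluated if not met]
    return "adequate" if not missed else "insufficient progress"

# Urban school: large subgroups mean more live targets, more ways to miss.
urban = [("all students", 600, True),
         ("students with IEPs", 50, False)]   # one miss -> failing label
print(ayp_label(urban))     # insufficient progress

# Same scores, but the IEP subgroup falls just below the threshold,
# so that target never exists and the school passes on its overall average.
suburban = [("all students", 600, True),
            ("students with IEPs", 44, False)]  # not evaluated at all
print(ayp_label(suburban))  # adequate
```

Note that nothing in the rule rewards hitting 24 of 25 targets more than hitting 11 of 11; only the count of misses among evaluated targets matters.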
Come to think of it, though, 45 is a big number. 30 would provide reliable estimates, I would think.
Posted by: Thomas Schmeling at February 15, 2008 10:29 AM

Thomas,
I'm not sure what you mean by "blended." The way I read the report, if a school had 44 each of each minority group at every socioeconomic strata and every aptitude category, every single one of those subgroups could be losing ground, but every single one of them would be credited as adequate.
Now, I really don't believe that there are any schools in which this is even close to the case, but if missing in a category or two changes a school's designation, it's a substantial consideration.
Posted by: Justin Katz at February 15, 2008 12:17 PM

Justin,
I think I see what you mean. I would want to talk to RIDE to be sure about this, but I strongly believe the statement you quoted, "Schools without sufficient numbers of students (e.g., at least 45 students over the 3 years) in any one category were credited with meeting that particular target" is not technically correct, or at least not clear.
Here's another footnote from the report: "The number of targets schools were required to meet varied depending on the student population demographics. Schools were evaluated only on targets for which they had a sufficient number of students (e.g., at least 45 students in that subgroup)."
So, schools are only "credited with" meeting the particular target in the sense that the target automatically could not count against them, because it didn't exist for that school. They don't actually get a "target met" checkmark, which wouldn't matter anyway, because the report card only counts your failures, not your percentage of successes.
Does that make sense?
If you want an example, first read the one page guide at http://www.eride.ri.gov/reportcard/07/DOCS/qGuide.pdf
Then look at Thompson Middle School in Newport
http://www.eride.ri.gov/reportcard/07/ReportCard.aspx?schCode=21106&schType=2
The school had 25 targets. It hit 24 of them, but it got labeled "insufficient progress" because it missed one by a great deal. That one was disabled students. I would venture that the schools have no business giving a lot of those kids the NECAP test in the first place.
Notice the target scores at the top. They are exactly the same for every school in the state, regardless of the demographics. Notice that overall population exceeded the target scores in both math and English. Then note that each of the subcategories other than white did not actually reach the target score. All of these groups, except the disabled, got a "target met" because they had made sufficient progress toward the target.
So, to your question: "if a school had 44 each of each minority group at every socioeconomic strata and every aptitude category, every single one of those subgroups could be losing ground, but every single one of them would be credited as adequate"?
Yes... almost sort of... not really. There would not be a "target met" checkmark, nor a "target not met," because that group is simply not evaluated separately. Even with the same overall score, if Thompson had fewer than 45 disabled kids, or fewer than 45 in the other categories, it would have sailed through with at least "moderately performing" based on its overall average.
For fun, compare North Smithfield Junior High. At http://www.eride.ri.gov/reportcard/07/ReportCard.aspx?schCode=25108&schType=2
Notice that in the scores on the left, the kids there are "all" white, non-disabled, non-economically disadvantaged etc. I somehow doubt that.
I do believe we should track different groups and compare. I understand that if there are too few, you don't get a valid sample. But to then use these scores to create a "grade" as they do? I don't buy it.
How much does one learn by knowing that a particular school did not make "adequate yearly progress"? I say, "not enough."
Posted by: Thomas Schmeling at February 15, 2008 1:57 PM

In case I did not make this clear above, the scores of the English-learner, disabled, etc., kids in N. Smithfield are just averaged in with the others in the "overall" score. Since the latter is only .1 points below the former, I'll guess there aren't a lot of those kids at that school.
Also, N. Smithfield has 11 targets, as opposed to Thompson's 25.
Posted by: Thomas Schmeling at February 15, 2008 2:04 PM

I apologize for this, but I just noticed a better example than Thompson: Samuel Slater MS in Pawtucket. EVERY subgroup hit the state target in math AND English, except disabled kids. 24/25 targets.
Bang! "Insufficient Progress." You are a BAD SCHOOL.
http://www.eride.ri.gov/reportcard/07/ReportCard.aspx?schCode=26106&schType=2
Thomas,
I don't see how any of your further information changes the basic fact of the matter. (Thanks, though, for providing a documented explanation of the system.)
It appears deceptive to say that a school that doesn't have enough black students, say, to have a target for them just "rolls those students" into the overall total. All schools have an overall total. In theory, a school could have up to 308 non-white, non-above-poverty students all failing miserably, and because each group only hit 44 students, its adequate performance among financially stable, educationally normative white students would pass that school.
I think we agree that the scoring is ridiculous. Where we appear to differ is that I bristle at the propaganda value that vested interests (such as your co-blogger Crowley) seek to get out of that bad system.
Thanks to bureaucracy and unions, it is actually more in our educational establishment's interests to come up with nutty proficiency measures than to come up with ways to make actual improvements.
Posted by: Justin Katz at February 15, 2008 8:29 PM

Justin,
Your assessment of "the basic fact of the matter" and mine must be quite different.
I have simply tried to explain how the scoring system works, and to show how it can, and does, unfairly label a school as underperforming. I hope that was useful to readers.
You say, "I think we agree that the scoring is ridiculous. Where we appear to differ is that I bristle at the propaganda value that vested interests (such as your co-blogger Crowley) seek to get out of that bad system."
First, I think that you and I, as taxpayers and parents, have just as much of a "vested" interest in education policy and performance as Pat or anyone else.
Second, (and I hesitate to even respond to this, because I think it is irrelevant to any discussion of the facts) Pat is by no means my 'co-blogger'. RIF has a "diary" feature by which commenters can start their own topic. (I recommend it to you here on AR) The blog-masters can elevate a diary to a post if they think it worthy. Very recently, I've made a couple of "diary" posts on RIF that have been elevated. I take that as a recognition that I provide substance, and am quite honored. That doesn't make me a "blogger" there.
One of those "elevations" was by Matt and the other was, I think, by Alex. I don't think either was by Pat, but I don't really care. I think Pat is right on some things, and not others. (For instance, I think he should apologize for giving the finger to administrators in Portsmouth, or wherever that was). I am happy to agree with him (or anyone) when I think he is right, and disagree when I think he is not.
Finally, I have a serious interest, both professionally and as a citizen, in drawing valid conclusions from data about how our government, including the public schools, is performing. I assure you that I bristle at ANYONE who seems interested in trying to turn the data this way and that so as to wring their desired conclusions out of it.
Thomas,
I've no doubt that the scoring system leads to unfair "underperforming" results. But it's also clear to me that it allows deceptive presentation. I don't, for example, think that racial categorization is just, or even moral, in such matters, but as it stands, the presentation is of this large number of targets, which tends to minimize the significance of missing one or two.
We agree that taxpayers and parents have "a 'vested' interest in education policy and performance," but our interest is in the success of education, in the system's improvement. The interests of unions and other established players require them to put forward a picture in which the system is fundamentally working, but with some unjust restraint (surprisingly, usually the amount of money available for them to spend).
Our interests are better served by a clear-eyed understanding of what's wrong; theirs are better served by an understanding of what's wrong that gives them a central role in the fix.
Posted by: Justin Katz at February 16, 2008 9:20 AM