
Monday, October 9, 2017

Ohio's School Districts Outperforming Charter Schools at Historically High Rate on Performance Index Score

Under Ohio law, the state's performance index score is the primary determinant of whether a charter school will open in your community. The score is a weighted measure of student standardized test scores -- scores notoriously linked to student demographics.

The state's scores have dropped overall during the last several years as new tests and standards have forced schools and students to adjust their learning. However, what is clear from an examination of Ohio's historic performance index results is that Ohio's public school districts have been much better at adjusting to the changes than Ohio's charter schools.

In fact, the last two school years -- while representing the two lowest median performance index scores for school districts -- represent the two largest relative score differentials between districts and charters ever recorded.

What's that mean? It means that while Ohio's school districts have dropped, charters have dropped more, making the score disparity between the two larger than ever. The 2015-2016 school year saw districts outperform charters by nearly 50 percent. Last year, it was closer to 45 percent.

For several years, the gap had been narrowing -- especially after the state effectively removed the 90 worst-performing charters from qualifying for the performance index score after the 2011-2012 school year.

However, while Ohio's school districts saw a quick, two-year 13 percent drop in performance index scores, charters saw a 27 percent drop during that period -- more than double the relative dip.
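
For clarity, here's how I'm computing these relative comparisons -- a minimal Python sketch. The scores below are illustrative placeholders, not the actual statewide medians:

```python
def relative_differential(district_score, charter_score):
    """Percent by which the district score exceeds the charter score."""
    return (district_score - charter_score) / charter_score * 100

def relative_drop(old_score, new_score):
    """Percent decline from an earlier score to a later one."""
    return (old_score - new_score) / old_score * 100

# Placeholder scores only -- not the actual state medians.
print(round(relative_differential(87.0, 58.0), 1))  # 50.0 -> districts ~50% higher
print(round(relative_drop(100.0, 87.0), 1))         # 13.0 -> a 13 percent drop
```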

Again, the reason I'm harping on performance index scores -- though I agree with my charter school advocate friends that the scores are too closely tied to demographics -- is that performance index scores are the primary mechanism by which charter schools can open in school districts. If a district scores in the bottom 5 percent of districts on the performance index, charters can come to town.

However, it is abundantly clear that, in the vast majority of cases, those charters produce performance index scores far worse than those of the districts whose scores are so troubling to lawmakers that they allow competition, ostensibly to improve the community's educational options.

If performance index is the measure by which we are determining this need for district competition, what does it say that the alleged competitors score so much worse?

Suffice it to say, I get why charter school advocates are now trying to move away from proficiency-based accountability regimes. Call me cynical if you wish, but I wonder how much of this move is driven by the fact that charters overall just perform more poorly than their district competition.

The Quiet Importance of Ohio's Performance Index Score

Many in the charter school advocacy community have been pushing to have the state judge charter schools only by their so-called "value-added" scores -- the amount of student growth shown during the year on the state's battery of standardized tests. Among the advantages these advocates stress is that value-added scores tend to be less influenced by demographics than straight proficiency scores.

Value-added measures have real limitations as well; many critics argue they are just as problematic for accountability purposes as proficiency scores.

However, proficiency scores continue to be among the most important single state report card indicators. Why? Because according to state law, if a school district scores in the bottom 5 percent of performance index scores, charter schools can come to town. Performance index is calculated by weighting scores depending on how high or low students score on state proficiency tests.

So despite charter school advocates' admission of what testing critics have said for years -- that proficiency scores are closely tied to demographics -- and despite their insistence that value-added scores be the primary determinant of a school or district's success, these advocates have been silent on this fundamental provision of state law: the one that determines whether a school district will have publicly funded, privately run schools competing for its students and dollars.

Here's one reason why they may remain silent on this issue: Charter schools, which are meant to provide options to parents in districts with poor proficiency scores, have even worse performance index scores than the bottom 5 percent of districts.

According to the current list of "challenged school districts" put out by the Ohio Department of Education (the list was frozen after the 2013-2014 school year as Ohio transitions to its A-F report card), there are 24 districts that qualified then for charter schools under the performance index criterion. There are an additional 15 that qualify because they were part of the original pilot program (Lucas County schools) or are a Big 8 urban district (Akron, Canton, Cincinnati, Cleveland, Columbus, Dayton, Toledo and Youngstown).

The median performance index score for those 24 "challenged school districts" was 83.058 (out of 120 possible) in 2013-2014. Do you know how many charter schools outperformed that score on the last report card? If you guessed 23, you'd be right.

That's 23 out of 265 charters that had performance index scores recorded by the state -- a staggeringly low 9 percent of Ohio charter schools. That does not include the roughly 90 dropout recovery schools, which do not receive performance index scores but which routinely rated among the worst-performing schools in the state before receiving their far more lenient report card. For example, in the last year dropout recovery schools received performance index scores (2010-2011), they scored 10 points lower than their non-dropout-recovery counterparts.

The amazing thing is that there are fewer charters currently outperforming Ohio's lowest-performing school districts than there are "challenged school districts" designated for their poor proficiency scores!

More than 94 percent of the money being spent on Ohio charter schools with performance index scores goes to charters with worse performance index scores than the school districts whose scores were so bad that state lawmakers deemed them "challenged" enough to let charter schools come in and offer "better" options ...
6 percent of the time!

What do you suppose will be state lawmakers' response to this conundrum -- the alleged solution is even worse performing than the problem it was meant to solve? If you guessed nothing, you're cynical. But you're probably right.

This is also why I continue to compare district performance with charter school performance.

Sorry guys.

The state uses school districts' performance on the very measures charter advocates criticize as unfair gauges of their favorite schools to determine whether charters should open in certain communities.

Yet that same state metric would designate more than 9 of 10 Ohio charter schools as "challenged" and in desperate need of competition.

For those of you who would charge that this comparison is unfair because performance index scores are lower overall today than they were in 2013-2014: applying the same "challenged school district" performance index standard to today's scores would still leave nearly 6 in 10 charters scoring worse.

Again, the "challenged school district" designation remains based on how that district scored in 2013-2014.
But regardless of which criterion you use, charters overall just don't pass muster.

Friday, August 19, 2016

Testing the Boundaries: A Series on Ohio (and the Nation's) Achievement Gap. Part I: After 12 years, Ohio's Performance Disparity Between Rich and Poor Districts has Grown Worse. Now What?

One of the worst traps we can fall into as analysts is ignoring the big picture. Lately, I've been guilty of this -- dealing with short-term, juicy topics while losing sight of the forest in front of my face.

So I decided to look at 12 years of Ohio Graduation Test data and see how different districts fared each year. The OGT results are listed separately on the Ohio Department of Education website.

The results blew me away: after more than a decade of test-focused reform, Ohio's achievement gap between its wealthiest and poorest districts has gotten worse, not better. What now? Well, I'm going to do my part -- namely, a blog series on exactly that: how we determine academic success, and why we aren't closing the gap.

The data that inspired this series comes from the OGT results put out by the Ohio Department of Education. I chose OGT data because it has been the least volatile state-administered test. ODE lists the percentage of students who scored advanced, accelerated, proficient, basic and limited on each year's tests. Then, using the state's Performance Index formula (I didn't include the new "Advanced Plus" category in the calculation. Again, for all you nerds out there.), I was able to crunch those five categories into a single, mini-Performance Index (PI) score so I could more easily see how districts were improving.
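
To make the method concrete, here's a minimal sketch of that mini-PI calculation. The weights are Ohio's published Performance Index weights as I understand them -- treat them as an assumption and check ODE's documentation for the exact current values. The district percentages below are made up for illustration:

```python
# Weight for each scoring band (an assumption on my part -- verify
# against ODE's report card documentation).
WEIGHTS = {
    "advanced": 1.2,
    "accelerated": 1.1,
    "proficient": 1.0,
    "basic": 0.6,
    "limited": 0.3,
}

def mini_pi(pct_by_band):
    """Collapse the percentage of students in each band into one score.
    A district with 100% advanced students would score 120."""
    return sum(WEIGHTS[band] * pct for band, pct in pct_by_band.items())

# Hypothetical district; percentages are made up for illustration.
district = {"advanced": 30, "accelerated": 25, "proficient": 30,
            "basic": 10, "limited": 5}
print(round(mini_pi(district), 1))  # 101.0
```

The state's full calculation also accounts for untested students and the newer "Advanced Plus" band, which I excluded here just as I did in the analysis.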

I then looked at each district's improvement on the raw mini-PI score and how it ranked among the 608 districts that could be compared in every year. I didn't include the island districts or College Corner.

Then I looked at their typology. While the typology numbers and definitions have changed slightly over the years, the typology tells you the kind of district based on community makeup and poverty. Here's the most current typology chart:

[Typology chart from ODE]

The typology makeup is interesting in and of itself. For example, you can see that about two-thirds of Ohio's school districts are in small towns or rural communities. Yet two-thirds of Ohio's schoolkids are in suburban and urban districts.

This explains Ohio's struggle with school funding to a great extent: the ways you can make a funding formula work for rural districts will likely hurt suburban and urban areas, where most of the kids are.

But I digress.

Again, I used the typology chart to determine which typologies tended to score better than others in each year. Then I looked to see how they improved (or didn't) between the 2003-2004 school year and the 2014-2015 school year.

The results aren't really surprising. The wealthiest categories (3, 5, and 6) rated the best. The poorest (categories 1, 4, 7, and 8) did the worst. What is surprising is this: the achievement gap between rich and poor districts is growing more pronounced after a dozen years of test obsession.

For example, on the 2003-2004 Math OGT, category 6 (very wealthy suburban districts like Ottawa Hills) made up 46.7% of the top 10% of mini-PI scores. In 2014-2015, that had jumped about 10 percentage points, to 56.3%. Meanwhile, in 2003-2004, the poorest urban districts (category 7, which includes districts like Euclid, and category 8) made up 38.3% of the bottom 10% of scoring districts, and 6 of the Big 8 urbans (category 8: Akron, Canton, Cincinnati, Cleveland, Columbus, Dayton, Toledo and Youngstown) scored in the bottom 10%. On the 2014-2015 Math OGT, all the Big 8 urban districts scored in the bottom 10%, and 56.6% of the bottom 10% of scores came from the state's poorest urban districts.

The same general breakdown and shift has occurred on reading scores, though not quite as dramatically. But the disparity between the wealthiest and poorest districts has still grown significantly.

So even though urban districts only make up about 9% of all districts in the state, they make up nearly 60% of the 60 lowest scoring districts in the state. Likewise, the state's wealthiest suburban districts (category 6) account for 7.6% of all Ohio districts, but make up about the same 60% of the 60 highest performing districts.

It is equally telling that in neither 2003-2004 nor 2014-2015 did a single urban district score in the top 10% on either OGT subject. And only one wealthy suburban district scored in the bottom 10% of either test in either year (Gahanna, on 2003-2004 Reading).

What does all this mean? Well, it appears that, generally speaking, 12 years of test-focused accountability has grown the achievement gap between the state's wealthiest and poorest districts, not shrunk it. But I want to ask a different question: Can this disparity ever shrink?

We've known for years about the powerful connection between test scores and poverty. And we've tried to mitigate the problem by using value-added scores, or some other statistical pretzel, but the fact remains that the data produced by test scores has as much (if not more) to do with poverty as with classroom performance.

This calls into question our whole test-based accountability scheme. For example, if no Big 8 Urban district scored outside the bottom 10% on this set of OGT scores, is it a failure of the Big 8, or merely a confirmation that the tax and census data showing the extreme poverty in these communities is accurate? And if that is so, is it fair to hold these districts, buildings, teachers and even communities liable for the district's performance?

And if these scores are measuring poverty rather than quality, should we be opening the doors to more and often poorer performing choice options in these districts?

And if test scores aren't cutting the mustard, what can? And at what cost?

These are all questions I'll be exploring over the next several days as I dig into this series. But I think it's important to recognize that the state and nation's poverty achievement gap, if we keep measuring it through tests, may never close. We may only get a true assessment of our nation's education system when we stop using subject-based standardized tests to measure achievement.

Monday: Testing the Boundaries
Part II - What Can Outliers Teach Us?