One of the worst traps we can fall into as analysts is to ignore the big picture. Lately, I've been guilty of this -- chasing short-term, juicy topics while missing the forest for the trees.
So I decided to look at 12 years of Ohio Graduation Test (OGT) data to see how different districts fared each year. The OGT results are listed separately for each year on the Ohio Department of Education
website.
The results blew me away; after more than a decade of test-focused reform, Ohio's achievement gap between its wealthiest and poorest districts has gotten
worse, not better. What now? Well, I'm going to do my part -- namely, a blog series on exactly that: how do we determine academic success, and why
aren't we closing the gap?
The data that inspired this series comes from the OGT results put out by the Ohio Department of Education. I chose OGT data because it's been the least volatile state-administered test. ODE lists the percentage of students who scored advanced, accelerated, proficient, basic, and limited for each year's testing. Then, using the state's Performance Index
formula (I didn't include the new "Advanced Plus" category in the calculation -- again, for all you nerds out there), I crunched those five categories into a single mini-Performance Index (PI) score so I could more easily see how districts were improving.
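For the nerds who want to follow along at home, here's a minimal sketch of that crunching in Python. The weights below are the commonly published Ohio Performance Index weights (limited 0.3, basic 0.6, proficient 1.0, accelerated 1.1, advanced 1.2) -- treat them as my assumption, not ODE's official code.

```python
# Minimal mini-Performance Index sketch. The weights are the commonly
# published Ohio PI weights -- an assumption here, not ODE's official code.
PI_WEIGHTS = {
    "advanced": 1.2,
    "accelerated": 1.1,
    "proficient": 1.0,
    "basic": 0.6,
    "limited": 0.3,
}

def mini_pi(pct_by_band):
    """Collapse the five score-band percentages (0-100) into one score."""
    return sum(PI_WEIGHTS[band] * pct for band, pct in pct_by_band.items())

# Example: 20% advanced, 25% accelerated, 35% proficient, 12% basic, 8% limited.
print(mini_pi({"advanced": 20, "accelerated": 25, "proficient": 35,
               "basic": 12, "limited": 8}))  # -> 96.1
```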
I then looked at the districts' improvement on the raw mini-PI score and at how they ranked each year among the 608 districts that could be compared each year. (I didn't include the island districts or College Corner.)
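The ranking step is straightforward. Here's a sketch of what it looks like with pandas; the table layout, the column names, and the sample numbers are all hypothetical, and a real run would load all 608 comparable districts for each year.

```python
import pandas as pd

# Hypothetical layout: one row per district per school year, with the
# mini-PI score computed as above and the district's typology category.
df = pd.DataFrame({
    "district": ["A", "B", "C", "D"] * 2,
    "typology": [6, 3, 6, 8] * 2,
    "year":     ["2003-04"] * 4 + ["2014-15"] * 4,
    "mini_pi":  [96.1, 88.4, 102.3, 74.9, 97.0, 85.2, 104.1, 70.3],
})

# Rank districts within each year (1 = highest mini-PI that year),
# then use percentile ranks to flag the top and bottom 10%.
df["rank"] = df.groupby("year")["mini_pi"].rank(ascending=False, method="min")
df["pct_rank"] = df.groupby("year")["mini_pi"].rank(pct=True)
df["top_10"] = df["pct_rank"] > 0.9
df["bottom_10"] = df["pct_rank"] <= 0.1
print(df)
```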
Then I looked at their typology. While the typology numbers and definitions have changed slightly over the years, the typology tells you the kind of district based on community makeup and poverty. Here's the most current typology
chart:
The typology makeup is interesting in and of itself. For example, you can see that about 2/3 of Ohio's school
districts are in small towns or rural communities. Yet 2/3 of Ohio's school
kids are in suburban and urban districts.
This explains Ohio's struggle with school funding to a great extent: the ways you can make a funding formula work for rural districts will likely hurt suburban and urban areas, where most of the kids are.
But I digress.
Again, I used the typology chart to determine which typologies tended to score better than others in each year. Then I looked to see how they improved (or didn't) between the 2003-04 school year and the 2014-15 school year.
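Continuing the sketch from above (same hypothetical table, which already carries a typology column), the typology comparison is just a group-by:

```python
# Average mini-PI by typology in the first and last years, and the change
# between them. Continues from the hypothetical `df` in the sketch above.
by_type = (
    df[df["year"].isin(["2003-04", "2014-15"])]
    .groupby(["typology", "year"])["mini_pi"]
    .mean()
    .unstack("year")
)
by_type["change"] = by_type["2014-15"] - by_type["2003-04"]
print(by_type.sort_values("change"))
```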
The results aren't really surprising. The wealthiest categories (3, 5, and 6) rated the best. The poorest (categories 1, 4, 7, and 8) did the worst. What
is surprising is this:
The achievement gap between the rich and poor districts is growing more pronounced after a dozen years of test obsession.
For example, on the 2003-2004 Math OGT, category 6 (very wealthy, suburban districts like Ottawa Hills) made up 46.7% of the top 10% of mini-PI scores. In 2014-2015, that had jumped about 10 percentage points to 56.3%. Meanwhile, in 2003-2004 the poorest urban districts (category 7, which includes districts like Euclid, and category 8) made up 38.3% of the bottom 10% of scoring districts, and 6 of the Big 8 urbans (category 8, which is Akron, Canton, Cincinnati, Cleveland, Columbus, Dayton, Toledo, and Youngstown) scored in the bottom 10%. On the 2014-2015 Math OGT, all the Big 8 urban districts scored in the bottom 10%, and 56.6% of the bottom 10% of scores came from the state's poorest urban districts.
The same general breakdown and change has occurred on reading scores, though not quite as dramatically. But the disparity between the wealthiest and poorest districts has still grown significantly.
So even though urban districts make up only about 9% of all districts in the state, they account for nearly 60% of the 60
lowest-scoring districts. Likewise, the state's wealthiest suburban districts (category 6) account for just 7.6% of all Ohio districts, but make up about the same 60% of the 60
highest-performing districts.
It is equally telling that neither in 2003-2004 nor in 2014-2015 did a single urban district score in the top 10% on either OGT test. And only one wealthy suburban district scored in the bottom 10% of either test in either year (Gahanna, on 2003-2004 Reading).
What does all this mean? Well, it appears that, generally speaking, 12 years of test-focused accountability has
grown the achievement gap between the state's wealthiest and poorest districts,
not shrunk it. But I want to ask a different question: Can this disparity
ever shrink?
We've known for
years about the powerful connection between test scores and poverty. And we've tried to mitigate the problem by using value-added scores, or some other statistical pretzel, but the fact remains that the data produced by test scores has as much (if not
more) to do with poverty as with classroom performance.
This calls into question our whole test-based accountability scheme. For example, if no Big 8 Urban district scored outside the bottom 10% on this set of OGT scores, is it a failure of the Big 8, or merely a confirmation that the tax and census data showing the extreme poverty in these communities is accurate? And if
that is so, is it fair to hold these districts, buildings, teachers, and even communities liable for that performance?
And if these scores are measuring poverty rather than quality, should we be opening the doors to more, and often poorer-performing, choice options in these districts?
And if test scores aren't cutting the mustard, what can? And at what cost?
These are all questions I'll be exploring over the next several days as I dig into this series. But I think it's important to recognize that the state and nation's poverty achievement gap, if we keep measuring it through tests, may
never close. We may only get a true assessment of our nation's education system when we stop using subject-based standardized tests to measure achievement.
Monday: Testing the Boundaries
Part II - What Can Outliers Teach Us?