August 19, 2017

Follow-up on vouchers

Some readers have responded to my post on school quality by pointing me to a “meta-analysis” by M. Danish Shakeel, lauded by the Foundation for Economic Education. This “meta-analysis” claimed that vouchers, in general, produce a 0.25 standard deviation benefit in reading and a 0.15 standard deviation benefit in math.

The first problem is that studies from Colombia and India constituted 65.94% of the reading-score weight and 51.63% of the math weight, with US studies making up the remainder.
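To see how much that weighting matters, here is a minimal sketch of the arithmetic: a pooled effect size is just a weighted average of per-study effects, so the headline number is dominated by whichever studies carry the weight. The effect values below are hypothetical placeholders, not Shakeel’s actual per-study estimates; only the 65.94/34.06 reading split comes from the paper.

```python
# Hypothetical illustration: a pooled effect size as a weighted average.
# Effect values are placeholders; only the weight split is from the paper.

def pooled_effect(effects, weights):
    """Weighted mean effect size (weights assumed to sum to 1)."""
    return sum(d * w for d, w in zip(effects, weights))

# Reading: Colombia/India studies carry 65.94% of the weight, US the rest.
effects = [0.35, 0.05]       # placeholder effects (SD units): non-US vs. US
weights = [0.6594, 0.3406]
print(pooled_effect(effects, weights))  # ~0.25, driven by the non-US studies
```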

The next problem, and the reason I put “meta-analysis” in quotes, is that a single study from NYC made up 55.77% of the weight for the US analysis. That program began in second grade, and a follow-up was later done that looked at the students’ scores at the end of high school. Those results are available here.

As it happened, the vouchers apparently had benefits at younger ages, but by senior year they had no effect, or a negative one, on test scores. Even if there was a benefit in 5th grade, there was none by the end of high school, and that guts over half of the supposed US impact of this paper.
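The arithmetic of what that negation does to the pooled US number is simple. Here is a rough sketch with placeholder effect sizes; the 55.77% NYC weight is the only number taken from above.

```python
# Rough sketch: zeroing out a study that carries 55.77% of the weight.
# The effect sizes are hypothetical; only the NYC weight is from the post.

w_nyc, w_rest = 0.5577, 0.4423
d_nyc_early = 0.30   # placeholder: NYC effect as measured in early grades
d_rest      = 0.10   # placeholder: average effect of the other US studies

pooled_before = w_nyc * d_nyc_early + w_rest * d_rest  # ~0.21
pooled_after  = w_nyc * 0.0         + w_rest * d_rest  # ~0.04
print(pooled_before, pooled_after)
```

Whatever the other studies show, more than half of the headline US effect rests on a result that did not survive its own follow-up.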

But there are other studies Shakeel used that I know about. For example, Shakeel claims a big positive effect from the Milwaukee voucher study, which anyone with the basic score data knows is wrong:

Grade / Subject   Voucher 2006   Non-Voucher 2006   Voucher 2010   Non-Voucher 2010
7  – Reading          432.2           435.3             492.2           485.4
8  – Reading          446.5           436.9             505.1           486.1
10 – Reading          458.0           472.9             493.5           492.0
7  – Math             388.2           395.7             501.6           500.0
8  – Math             426.3           424.4             504.2           493.3
10 – Math             462.9           478.7             515.5           524.2
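For concreteness, here is a short pass over the table computing the voucher-minus-non-voucher gap in each cell, in raw scale points, straight from the numbers above.

```python
# Voucher minus non-voucher gaps from the Milwaukee table above.
# Positive = voucher students scored higher; negative = lower.

rows = {
    ("7",  "Reading"): (432.2, 435.3, 492.2, 485.4),
    ("8",  "Reading"): (446.5, 436.9, 505.1, 486.1),
    ("10", "Reading"): (458.0, 472.9, 493.5, 492.0),
    ("7",  "Math"):    (388.2, 395.7, 501.6, 500.0),
    ("8",  "Math"):    (426.3, 424.4, 504.2, 493.3),
    ("10", "Math"):    (462.9, 478.7, 515.5, 524.2),
}

for (grade, subj), (v06, n06, v10, n10) in rows.items():
    print(f"Grade {grade} {subj}: 2006 gap {v06-n06:+.1f}, 2010 gap {v10-n10:+.1f}")

# The gaps are mixed in sign, and grade 10 -- the oldest students -- shows
# voucher deficits in three of four cells. Nothing here looks like a big
# positive effect.
```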

Another study Shakeel used, which he claimed showed a big positive causal effect for vouchers, was the Washington, DC Opportunity Scholarship Program. But again, if you know the scores, you know that there was no benefit from the vouchers:

Group        Math     Reading
Voucher      641.00   645.92
Applicant    643.36   645.24
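The same exercise on this table:

```python
# Gaps from the DC table above: voucher recipients minus applicants,
# in raw scale points.

math_gap    = 641.00 - 643.36   # -2.36
reading_gap = 645.92 - 645.24   # +0.68
print(math_gap, reading_gap)
# A couple of scale points in opposite directions -- essentially zero.
```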

Another study Shakeel used that I know about was the Charlotte study. This study, done in North Carolina, had no control group. They didn’t compare applicants who got vouchers to applicants who did not get vouchers. No, they took the people who got the vouchers and then created a “comparison group” made up of people who didn’t apply for the vouchers at all, controlling for gender and SES. Which is to say, they had no control group.

“Control for SES” only means anything to people who already think that SES-like things (like variance in supposed school quality) matter in the first place!

Worth noting is that the voucher group was 12% white, while the comparison group was only 6% white. So they didn’t even control for race. Oh, but they controlled for gender! Whoop-de-doo!

The authors themselves admit the study is worthless, saying:

“Though analysis found significant differences between the CSF-C scholarship students and comparison group, this does not mean that participation in CSF-C caused those differences. Other factors, such as parent participation in education, may have been the cause of the differences in outcomes.”
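Their caveat is exactly the problem. Here is a toy simulation of it, with entirely made-up numbers: if an unobserved factor like parental involvement raises both the odds of applying for a voucher and test scores, then applicants beat matched non-applicants even when the true voucher effect is zero.

```python
import random

random.seed(0)

# Toy simulation of the Charlotte-style design: voucher applicants vs. a
# "comparison group" of non-applicants. The true voucher effect here is
# ZERO; the gap comes entirely from who chooses to apply.

def student():
    involvement = random.gauss(0, 1)   # unobserved parental involvement
    ses = random.gauss(0, 1)           # observed SES (what they matched on)
    applied = involvement + random.gauss(0, 1) > 0.5
    score = 100 + 5 * ses + 5 * involvement + random.gauss(0, 5)
    return applied, score

sample = [student() for _ in range(100_000)]
applicants    = [s for a, s in sample if a]
nonapplicants = [s for a, s in sample if not a]

mean = lambda xs: sum(xs) / len(xs)
print(f"applicants {mean(applicants):.1f} vs comparison {mean(nonapplicants):.1f}")

# Applicants come out a few points ahead despite a zero treatment effect.
# Controlling for SES does nothing here, because SES isn't what drives the
# decision to apply -- involvement is, and it's unobserved.
```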

That’s four of the six US studies used by Shakeel. The other two make up 11.74% of the US weighting, and he probably got those wrong as well.

So we’re not looking at a 0.25 standard deviation improvement in reading and a 0.15 standard deviation improvement in math from vouchers in the United States. We’re looking at around zero for both.

And I’m becoming increasingly closed-minded and intolerant on this issue. I’m tired of authors playing games: not reporting the basic results of the experiment (a treatment group compared to a proper control group), engaging instead in all sorts of exotic analyses, and reporting some claimed effect size they got out of who knows what rather than the basic data. And I’m tired of bowtie lolbergs sending me these “meta-analyses” that report Milwaukee showing a big positive impact for vouchers.

Facebook Comments
  • Jared Huggins

    Is it okay if I act like a bowtied lolberg and oppose vouchers because Hans Hoppe told me to?