Mr. Pane conducted a study, financed by the federal Department of Education, of an algebra software program. He found that high school students who used the program …showed gains on their state-standardized math tests that were nearly double the gains of a typical year’s worth of growth using a more traditional high school math curriculum.
Double! Well, that sounds impressive. But you should ask what these "doubled gains" actually are. Does "double" mean a gain of 2 points instead of 1 point? Or 30 points instead of 15?
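The problem with a ratio like "double" is that it hides the absolute size of the effect. A quick sketch (the numbers here are illustrative, not from the study) shows two scenarios with the exact same "doubled" ratio but very different practical significance:

```python
# Two hypothetical scenarios: (control gain, treatment gain) in test points.
# Both show "doubled gains," but the absolute differences are very different.
scenarios = {
    "small absolute gain": (1, 2),
    "large absolute gain": (15, 30),
}

for name, (control, treatment) in scenarios.items():
    ratio = treatment / control
    diff = treatment - control
    print(f"{name}: ratio = {ratio:.1f}x, absolute difference = {diff} points")
```

Both lines report a 2.0x ratio, yet one represents a single extra point and the other fifteen. That is exactly why the headline number alone tells you very little.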
Recall the steps in critically evaluating a study or experiment, and note the last one in particular:
- The source of the research and of the funding.
- The researchers who had contact with the participants.
- The individuals or objects studied and how they were selected.
- The exact nature of the measurements made or questions asked.
- The setting in which the measurements were taken.
- Differences in the groups being compared, in addition to the factor of interest.
- The extent or size of any claimed effects or differences.
So, in light of #7, you should read the actual study, not a summarized interpretation in a newspaper. Here are some notable excerpts from the study itself:
- …treatment effect estimates are not significant the first year. The estimates are negative in the high school study and near zero in the middle school study.
- …the magnitude is sufficient to improve the average student’s performance by approximately eight percentile points. Consider a student who would score at the 50th percentile in the control group; an effect size of 0.20 is equivalent to having that student score at the 58th percentile if they were in the treatment group.
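The study's conversion from an effect size of 0.20 to "the 58th percentile" follows from assuming normally distributed scores: shift a student's z-score up by the effect size and look up the new percentile. A minimal sketch of that calculation, using only Python's standard library:

```python
from statistics import NormalDist

def shifted_percentile(effect_size, baseline_percentile=50.0):
    """Percentile a student at baseline_percentile would reach if
    shifted up by effect_size standard deviations, assuming scores
    are (approximately) normally distributed."""
    nd = NormalDist()  # standard normal distribution
    z = nd.inv_cdf(baseline_percentile / 100)  # baseline z-score
    return 100 * nd.cdf(z + effect_size)       # new percentile

# An average (50th percentile) student with an effect size of 0.20:
print(round(shifted_percentile(0.20)))  # 58
```

This reproduces the study's figure: a 0.20 standard-deviation improvement moves the median student to roughly the 58th percentile, a real but modest gain.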
So, when you read the fine print, you learn that the first-year effect estimates were actually negative (and not statistically significant), and that the eventual improvement, about eight percentile points, may not be as large as the article led you to believe.