

August 31, 2006

SAT panic

Matt Yglesias makes the following observations:

1) The Washington Post Company, a "diversified media and education company," owns the Kaplan test-prep corporation.

2) The Washington Post's coverage of the decline in SAT scores in 2006 seems a little overwrought.

Personally, I suspect that the Post's coverage of the dip in SAT scores has more to do with sloppy thinking than with a conflict of interest. We're talking about a seven-point drop in 2006 scores compared to 2005. This decline is not surprising, considering that the College Board changed the test, making it longer and harder.

The Washington Post isn't the only media outlet covering the SAT score story, and I'm not sure whether its coverage is more alarmist than that of other papers. Do any other media conglomerates own test-prep companies? If so, are those papers also more likely to play up the SAT story than papers without a financial interest in test prep?


Comments

No, of course not. That's why we have nameless editors and executive editors and publishers to handle these intricate matters. Oh, and that is a combined score drop of 7 points: a 0.98% decline in reading and a 0.38% decline in math, for a total drop of 0.68% after the test change.
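For what it's worth, those percentages are consistent with 5-point and 2-point section drops from 2005 averages of roughly 508 (critical reading) and 520 (math); those baselines are an assumption on my part, not figures from the post. A quick sketch of the arithmetic:

```python
# Quick check of the percentage figures quoted above. The 2005 baselines
# (reading 508, math 520) and the 5-point / 2-point section drops are
# assumed, widely reported values, not numbers taken from the post.
reading_2005, math_2005 = 508, 520
reading_drop, math_drop = 5, 2                     # combined 7-point drop

reading_pct = 100 * reading_drop / reading_2005    # ~0.98%
math_pct = 100 * math_drop / math_2005             # ~0.38%
combined_pct = 100 * (reading_drop + math_drop) / (reading_2005 + math_2005)  # ~0.68%

print(f"reading {reading_pct:.2f}%  math {math_pct:.2f}%  combined {combined_pct:.2f}%")
```

Running it prints roughly 0.98%, 0.38%, and 0.68%, matching the figures above.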

Look out, we are getting stupid real fast - probably from reading the WAPO.

Or we could call it a PR release.

That's why I read People, they are more honest about their shilling.

The test is different each year, and has tended to become more difficult over time. Because it changes every year, it can't be used to track short-term trends in education.

There were huge structural changes in the SAT between 2005 and 2006. A big shift, even by SAT standards. This year they phased in an essay-writing component for the first time. As Matt notes on TPM, it makes sense that students faced with a whole new component might have less time left over for drilling math and syllogisms.

Also, they heavily normalize the scores each year, secretly, as I recall. Once you get through that, I'm not sure there's *any* meaning to an average drop in scores.

Students did not do “worse” on this new test. You can’t compare different tests like that.

The whole point of the new test is that it will differentiate among students in a way that is intended to be more predictive of college performance. If every student were to score the same on the new test as on the old test, then the new test would be a failure.

The new test doesn’t just add writing. Math and reading have been revamped. As the College Board website says, Since the test is different, performance of individual students will be different. Some people were just great at analogies. But the College Board decided that they were more like a puzzle-solving skill, not really a measure of comprehension, so they’re gone. If you whip through analogies, your score would be higher on the old test. If you are slow at them, you will do better on the new test.

On the other hand, college admissions departments need continuity. So scoring must be scaled so that the shape of the bell curve of scores does not change much – that is, the average remains the same, and the percentage of test-takers in each range of scores is about the same. And of course there is the public relations angle. It would be a disaster if the new test were seen as “harder” or “easier” than the old test.

So the fact that the average scores on the new test are almost identical to those on the old test has nothing to do with the students, and everything to do with the test developers. They didn’t get the new test to score on average precisely the same as the old test. But they came very close.

Did you ever see “Spinal Tap”? You remember the amps that went to eleven? That’s what this debate is about.

Correction: the College Board doesn't say that individual scores will be different. I originally put in a quote from their website, then took it out to shorten the post, but left the intro phrase in. So now I'm doubling the length of the post with a stupid correction.

Setting aside the WaPo controversy...

I began to read before elementary school and was described by just about every teacher I had from K-12 as a "born writer." Yet I always bombed standardized exams. I mean, I seriously bombed them. One thing that always bewildered me was the word analogy section:

"Butte" is to "cloud" as "cement block" is to _________.

a. aardvark
b. stationery
c. pork chop
d. cowboy hat

In later years I wondered, "Do people who write and write well ever think of language this way?" It seemed to me the analogies component was devoid of the passion of words, the historical resonances of words, the sheer rhetorical value of choosing words that complement one another in terms of sound and sense.

So I concluded that standardized tests were crap.

I was a whiz at the analogies section. The right answer always seemed obvious to me. It's like the way some people can solve the NY Times crossword in 2 minutes, and others will spend 2 hours and still have a bunch of blank spaces down in the lower left. That's a pretty impressive skill but it's not likely to predict success in college.

Now they've dumped the analogies, and added a writing section. So on the new test, Danton would score higher and I would score lower.

JR:

Since I was never any good at math, I never made any effort on those sections. I would simply fill in the boxes randomly and save my energy for the verbal section. Oddly enough, I generally scored much, much higher on the math sections. On one test I broke 700, if I recall (it's been three decades since I took an SAT).

As for the writing section, one doesn't write well, one rewrites well, so I still hold that standardized tests are crap.

One year means nothing.
Changes in the composition of students taking the test (scores go down as more people take the test, and up when most people have no hope of attending college) and in the composition of the test materials (schools have spent more time on reading and less on writing, in part because standardized tests have emphasized reading questions) both make a difference.
What's of interest is the trend over several years, not any single year. A few other points: with over a million test-takers, this is a statistically significant change in scores, but it is still practically meaningless. Also, scores are equated year to year, not secretly, but to make sure that the year you took the test doesn't work for or against you. That's what makes the ACT and SAT modestly interesting as markers for how education changes are working in the US (not particularly well, I understand; there's an interesting article in the NYTimes about community colleges and the number of students needing remedial work).
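On the equating point, the general idea is worth spelling out. What follows is not the College Board's actual procedure, just a minimal sketch of equipercentile equating, one standard way of putting two test forms on a common scale; the raw-score distributions are made up for illustration.

```python
import numpy as np

# Minimal sketch of equipercentile equating: a raw score on this year's form
# is mapped to the old-form score that sits at the same percentile rank.
# The score distributions below are invented for illustration only.
rng = np.random.default_rng(0)
old_form = rng.normal(30, 8, size=100_000)   # raw scores on last year's (easier) form
new_form = rng.normal(27, 8, size=100_000)   # raw scores on this year's (harder) form

def equate(raw_score, new_scores, old_scores):
    """Map a raw score on the new form to the old-form score at the same percentile."""
    pct = (new_scores <= raw_score).mean() * 100   # percentile rank on the new form
    return np.percentile(old_scores, pct)          # old-form score at that percentile

# A raw 27 on the harder form sits near the 50th percentile, so it equates to
# roughly a raw 30 on the easier form: same standing, same reported score.
print(round(equate(27, new_form, old_form), 1))
```

Under those assumptions, a harder form by itself doesn't drag reported scores down, which is how equating keeps the year you took the test from working for or against you.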
