Sunday, December 18, 2011

Test Scores Often Misused in Policy Decisions

This from the Huffington Post:

Education policies that affect millions of students have long been tied to test scores, but a new paper suggests those scores are regularly misinterpreted.
According to the new research out of Mathematica, a statistical research group, the comparisons sometimes used to judge school performance are more indicative of demographic change than actual learning.

For example: Last week's release of National Assessment of Educational Progress scores led to much finger-pointing about what's working and what isn't in education reform. But according to Mathematica, policy assessments based on raw test data are extremely misleading -- especially because year-to-year comparisons measure different groups of students.

"Every time the NAEP results come out, you see a whole slew of headlines that make you slap your forehead," said Steven Glazerman, an author of the paper and a senior fellow at Mathematica. "You draw all the wrong conclusions over whether some school or district was effective or ineffective based on comparisons that can't be indicators of those changes."

"We had a lot of big changes in DC in 2007," Glazerman continued. "People are trying to render judgments of Michelle Rhee based on the NAEP. That's comparing people who are in the eighth grade in 2010 vs. kids who were in the eighth grade a few years ago. The argument is that this tells you nothing about whether the DC Public Schools were more or less effective. It tells you about the demographic."
Those faulty comparisons, Glazerman said, were obvious to him back in 2001, when he originally wrote the paper. But Glazerman shelved it then because he thought the upcoming implementation of the federal No Child Left Behind act would make it obsolete.

That expectation turned out to be wrong. NCLB, the country's sweeping education law which has been up for reauthorization since 2007, mandated regular standardized testing in reading and math and punished schools based on those scores. As Glazerman and his coauthor Liz Potamites wrote, severe and correctable errors in the measurement of student performance are often used to make critical education policy decisions associated with the law.

"It made me realize somebody still needs to make these arguments against successive cohort indicators," Glazerman said, referring to the measurement of growth derived from changes in score averages or proficiency rates in the same grade over time. "That's what brought this about." So he picked up the paper again.

NCLB requires states to report on school status through a method known as "Adequate Yearly Progress." It is widely acknowledged that AYP is so ill-defined that it has depicted an overly broad swath of schools as "failing," making it difficult for states to distinguish truly underperforming schools. Glazerman's paper argues NCLB's methods for targeting failing schools are prone to error.

"Don't compare this year's fifth graders with last year's," Glazerman said. "Don't use the NAEP to measure short-term impacts of policies or schools."

The errors primarily stem from comparing the percentage of students proficient in a given subject from one year to the next -- a comparison that measures different groups of students each year, leading to false impressions of growth or loss.

Hat tip to the Commish.

Thursday, December 8, 2011

If We Tested School Board Members...

Over the years, finding a Kentucky school board member (or Trustee) who wasn't very smart was a fairly simple task. Well-educated Trustees were apparently the exception. But things have changed over the decades, and today's school board members are generally among the better educated citizens in most communities.

But what would happen if they had to be tested the same way students are - say, by taking the 10th grade exams? Well, that's never going to happen, right?

In Florida, it did.

This from Marion Brady in the Answer Sheet:

When an adult took standardized tests forced on kids
A longtime friend on the school board of one of the largest school systems in America did something that few public servants are willing to do. He took versions of his state’s high-stakes standardized math and reading tests for 10th graders, and said he’d make his scores public.

By any reasonable measure, my friend is a success. His now-grown kids are well-educated. He has a big house in a good part of town. Paid-for condo in the Caribbean. Influential friends. Lots of frequent flyer miles. Enough time of his own to give serious attention to his school board responsibilities. The margins of his electoral wins and his good relationships with administrators and teachers testify to his openness to dialogue and willingness to listen.

He called me the morning he took the test to say he was sure he hadn’t done well, but had to wait for the results.

Turns out the board member was quite a fella.
The man in question is Rick Roach, who is in his fourth four-year term representing District 3 on the Board of Education in Orange County, Fla., a public school system with 180,000 students. Roach took a version of the Florida Comprehensive Assessment Test, commonly known as the FCAT, earlier this year...

Roach, the father of five children and grandfather of two, was a teacher, counselor and coach in Orange County for 14 years. He was first elected to the board in 1998 and has been reelected three times. A resident of Orange County for three decades, he has a bachelor of science degree in education and two master's degrees: in education and educational psychology. He has trained over 18,000 educators in classroom management and course delivery skills in six eastern states over the last 25 years....

Now in his 13th year on the board, he had considered taking the test for a while as he began to increasingly question whether the results really reflected a student’s ability. He was finally pushed to do it earlier this year, he said, after a board meeting at which the chairman listed five goals, and one of them caught his attention for being so unremarkable.

Roach said: "He [the chairman] said that by 2013 or 2014, he wanted 50 percent of the 10th graders reading at grade level....I'm thinking, 'That's horrible.' Right now it's 39 percent of our kids reading at grade level in 10th grade. I have to tell you that I've never believed that that many kids can't read at that level. Never ever believed it. I have five kids of my own. None of them were superstars at school but they could read well, and these kids today can read too.

"So I was thinking, 'What are they taking that tells them they can't read? What is this test? Our kids do okay on the eighth grade test and on the fifth grade test and then they get stupid in the 10th grade?'"...
Here's his take on the experience.
“I won’t beat around the bush. The math section had 60 questions. I knew the answers to none of them, but managed to guess ten out of the 60 correctly. On the reading test, I got 62%. In our system, that’s a ‘D,’ and would get me a mandatory assignment to a double block of reading instruction.

“It seems to me something is seriously wrong. I have a bachelor of science degree, two master’s degrees, and 15 credit hours toward a doctorate. I help oversee an organization with 22,000 employees and a $3 billion operations and capital budget, and am able to make sense of complex data related to those responsibilities....

“It might be argued that I’ve been out of school too long, that if I’d actually been in the 10th grade prior to taking the test, the material would have been fresh. But doesn’t that miss the point? A test that can determine a student’s future life chances should surely relate in some practical way to the requirements of life. I can’t see how that could possibly be true of the test I took.”