November 2006 | Volume 64 | Number 3 | NCLB: Taking Stock, Looking Forward | Pages 53-57

Turmoil in the Testing Industry

Thomas Toch

As the testing industry buckles under the weight of NCLB's testing demands, states are opting for fast and cheap assessments that focus on basic skills.

Standardized achievement tests are crucial to No Child Left Behind's school reform effort because the legislation requires states to use these tests to measure whether students meet state standards. When insufficient percentages of students pass state tests, schools are judged as failing to make adequate yearly progress. And if they fail to make sufficient progress two or more years in a row, schools—and the educators who work in them—face increasingly severe consequences.

At the heart of this accountability system is extensive testing of students in grades 3–8 and in one high school grade in reading and math. Because schools tend to teach what's tested—especially when the test scores have consequences for teachers and principals—the content of the tests required by No Child Left Behind (NCLB) has become the focus of teaching and learning in public school classrooms throughout the United States.

That focus would be fine if states administered tests that measured the sorts of skills and knowledge that would lead to a first-class education for every public school student, the result that NCLB advocates have asserted the law would produce. But states don't usually administer those kinds of tests. The magnitude of NCLB's testing requirements, the law's demanding deadlines, insufficient federal funding, and other factors have produced a different result: Many states have adopted tests that can be constructed quickly and inexpensively. These tests primarily measure low-level skills, such as recall and restatement of facts, at the expense of synthesis, analysis, and other higher-order skills. Educators increasingly are focusing on the same low-level reading and math skills in their classrooms.
NCLB's goal is to raise instructional standards by requiring states to set challenging expectations for what students should know and be able to do. But many of the tests that states have introduced under NCLB are leading instruction in the opposite direction.

Heavy Demands on the Testing Industry

Creating high-quality tests is difficult and labor-intensive. The process involves determining the length and content of a test, hiring curriculum experts to write questions, and ensuring that the questions align with state standards. Test makers field-test the test items on thousands of students to ensure that these items don't discriminate against groups of students but do discriminate between strong and weak students, a complex mathematical task. Test makers also have to ensure that every multiple-choice question has only one correct answer and that the questions reflect an appropriate range of difficulty. They must perform another complex mathematical computation to ensure that the same scores on different tests represent the same level of performance. Then the tests must be edited, printed, and distributed.

It's a demanding process under the best of circumstances, and this complex test-making infrastructure is buckling under the weight of NCLB's testing demands. Moreover, the need to align tests to state standards forced the testing industry to custom-build the majority of the tests that were scheduled to be in place at seven grade levels in every state in spring 2006. And because a growing number of states release portions of their tests to the public after administering them each year, testing companies have to generate vastly larger pools of credible test questions and do so far more quickly. Many in the industry say that they can't find enough qualified people to do the work.
The surge in state testing under NCLB has created a severe shortage of psychometricians—the specialists who do the heavy statistical lifting in test making. Only a handful of these experts, who are trained in measurement theory and statistics at the University of Iowa, Michigan State, the University of Massachusetts, and a dozen or so other colleges, enter the workforce each year.

Testing companies also face immense pressures at the back end of the testing cycle. In the pre-NCLB era, states and school systems gave testing companies months to score standardized tests because the results rarely had immediate consequences. Now, completed answer sheets are routed from schools to testing company scoring centers, where results are tabulated and then uploaded directly to state education department or school system computers. State agencies must then analyze the results, grade schools and school systems on the basis of whether sufficient percentages of their students as a whole and in every subgroup have met state standards on the tests, and package the ratings in reports that NCLB requires them to supply to school systems. School systems, in turn, must route the state ratings to schools and parents.

All these reports must be completed in time for parents to place their children in tutoring or in different public schools before the start of the next school year, an opportunity that NCLB grants students in schools that fail to make adequate yearly progress. With many schools starting in August, the entire testing and state rating process must be completed by mid-July in many places—only three or four weeks after the end of the typical public school year. It would be difficult enough to successfully complete this process with long time lines. But many state policymakers, under pressure to give students as much time as possible to prepare for NCLB's high-stakes tests, are demanding that schools administer tests late in the school year.
Lobbying by local educators persuaded the Ohio legislature in 2005 to move the state's two-week testing window from March to May, beginning in 2007. The legislature also mandated that Ohio's testing contractors—the American Institutes for Research and Measurement Incorporated—report scores on the tests by June 15, two weeks earlier than in the past. Some states want even quicker turnarounds. Michigan fired Measurement Incorporated in 2005 for months-long delays in scoring the state's tests. Pearson Educational Measurement, the state's new contractor, is required to get test