I suspect that Glenn Youngkin, the governor of Virginia, knows very little, really, about public education. He was an investment banker before he became a politician, and his children attend the elite, private Georgetown Prep. But Youngkin knows how to build political capital by frightening parents and the general public about so-called failures in the state’s public schools. He campaigned last year by promoting the racist idea that parents need more control over their kids’ schools to prevent their children from being frightened or upset by the injustices that have scarred American history. And now, he has begun using test score data to try to paint the state’s public schools as failing.
The problem is that this time, as he tries to use the state’s scores on the “nation’s report card,” the National Assessment of Educational Progress (NAEP), to prove there is something drastically wrong with Virginia’s public schools, he and his so-called experts who just castigated the state’s schools in a new report seem to have misread the meaning of the test scores they denigrate. Youngkin’s claim is that too few Virginia students achieve the “proficient” cut score on the NAEP.
For the Washington Post, Hannah Natanson and Laura Vozzella report: “The Virginia Department of Education painted a grim picture of student achievement in the state in a report released Thursday, asserting that children are performing poorly on national assessments in reading and math and falling behind peers in other states. The 34-page report on students’ academic performance, requested as part of Gov. Glenn Youngkin’s first executive order, says these trends are especially pronounced among Black, Hispanic and low-income students. The report further critiques what it calls school districts’ lack of transparency regarding declining student performance—and it laments parents’ ‘eroding’ confidence in the state’s public schools.” The Youngkin administration’s new report contends that Virginia has been expecting too little of its public school students—that, while Virginia’s state test, the Standards of Learning or SOL, shows the state’s students are doing well, Virginia’s NAEP scores show the state’s students are not really “proficient.”
But Youngkin’s report ignores years of discussion about what the “proficient” achievement level on the National Assessment of Educational Progress really means. In her 2013 book, Reign of Error, Diane Ravitch, who once served on the NAEP’s Governing Board, took the trouble to explain: “All definitions of education standards are subjective… People who set standards use their own judgment to decide the passing mark on a test. None of this is science.” Ravitch explains further precisely how the NAEP Governing Board has always defined the difference between the “proficient” standard and the “basic” standard: “‘Proficient’ represents solid achievement. The National Assessment Governing Board (NAGB)… defines it as ‘solid academic performance for each grade assessed. This is a very high level of academic performance. Students reaching this level have demonstrated competency over challenging subject matter, including subject matter knowledge, application of such knowledge to real-world situations, and analytical skills appropriate to the subject matter.’… ‘Basic,’ as defined by NAGB, is ‘partial mastery of prerequisite knowledge and skills that are fundamental for proficient work at each grade.'” Ravitch concludes that according to the NAEP standard: “a student who is ‘proficient’ earns a solid A and not less than a strong B+” while “the student who scores ‘basic’ is probably a B or C student.” (Reign of Error, p. 47)
Daniel Koretz, a Harvard University expert on the construction of standardized tests and their uses for high-stakes school accountability, devotes an entire chapter of his 2017 book, The Testing Charade, to the topic, “Making Up Unrealistic Targets.” Koretz describes exactly how Glenn Youngkin appears to be manipulating the meaning of NAEP cut scores as an argument for blaming the schools and pressuring educators to prep students to improve test scores at any cost: “In a nutshell, the core of the approach has been simply to set an arbitrary performance target (the ‘Proficient’ standard) and declare that all schools must make all students reach it in an equally arbitrary amount of time…. (A)lmost all public discussion of test scores is now cast in terms of the percentage reaching either the proficient standard, or occasionally, another cut score… This trust in performance standards, however, is misplaced… (I)n fact, despite all the care that goes into creating them, these standards are anything but solid. They are arbitrary, and the ‘percent proficient’ is a very slippery number.” (The Testing Charade, pp. 119-121)
Natanson and Vozzella report that Virginia’s educators immediately pushed back against Youngkin’s new report: “The superintendent of Alexandria City Public Schools, Gregory C. Hutchings Jr., said the report inspired him to navigate to the NAEP website, where he discovered that Virginia students had consistently scored above the national average. ‘So, I’m not really understanding the whole premise of this report…. (which) was around us performing so much lower than everyone else.'”
Fortunately, last Friday, right after Youngkin’s report was released, the Washington Post’s Valerie Strauss published a column by James Harvey, the recently retired executive director of the National Superintendents Roundtable. Harvey scathingly criticizes the National Assessment Governing Board (NAGB) for its confusing definition of “proficient.” Like much federal policy after the Reagan administration’s 1983 report, A Nation at Risk, which blamed the public schools for widespread mediocrity and became the basis for standards-based school reform, the NAGB set its proficiency targets to drive higher expectations. Harvey writes: “Proficient doesn’t mean proficient. Oddly, NAEP’s definition of proficiency has little or nothing to do with proficiency as most people understand the term. NAEP experts think of NAEP’s standard as ‘aspirational.’ In 2001, two experts associated with NAGB made it clear that: ‘The proficient achievement level does not refer to ‘at grade’ performance. Nor is performance at the Proficient level synonymous with ‘proficiency’ in the subject. That is, students who may be considered proficient in a subject, given the common usage of the term, might not satisfy the requirements for performance at the NAEP achievement level.’”
Harvey summarizes the decades-long controversy about National Assessment of Educational Progress cut scores: “What is striking in reviewing the history of NAEP is how easily its policy board has shrugged off criticisms about the standards-setting process. The critics constitute a roll call of the statistical establishment’s heavyweights… (T)he likes of the National Academy of Education, the General Accounting Office, the National Academy of Sciences, and the Brookings Institution have issued scorching complaints that the benchmark-setting processes were ‘fundamentally flawed,’ ‘indefensible,’ and ‘of doubtful validity,’ while producing ‘results that are not believable.'”
Harvey continues: “How unbelievable? Fully half the 17-year-olds maligned as being just basic by NAEP obtained four-year college degrees. About one-third of Advanced Placement Calculus students, the creme de la creme of American high school students, failed to meet the NAEP proficiency benchmark. While only one-third of American fourth-graders are said to be proficient in reading by NAEP, international assessments of fourth-grade reading judged American students to rank as high as No. 2 in the world. For the most part, such pointed criticism from assessment experts has been greeted with silence from NAEP’s policy board.”
In her introduction to Harvey’s piece, Valerie Strauss explains: “Youngkin isn’t the first politician to misinterpret NAEP scores and then use that bad interpretation to bash public schools.” Please do read Strauss’s introduction and James Harvey’s fine column to better understand how high stakes standardized testing has been used politically to drive a kind of school reform that manipulates big data but has little relevance to expanding educational opportunity.