Tuesday’s NY Times featured a commentary whose author promotes artificial intelligence, big data, and “personalized” learning—algorithm-driven computer programs said to tailor learning to a student’s needs and interests—not only for reinventing education but for powering a new war on poverty. It is a glowing article framed as problem-solving: “Poverty, of course, is a multifaceted phenomenon. But the condition of poverty often entails one or more of these realities: a lack of income (joblessness); a lack of preparedness (education); and a dependency on government services (welfare). A.I. (artificial intelligence) can address all three.”
Clearly the author, Elisabeth A. Mason, the founding director of the Stanford Poverty and Technology Lab, isn’t a fan of dependency on government programs to provide support for people trapped in poverty, and she believes big data and artificial intelligence can match people who are out of work to “good middle-class jobs that are going unfilled. Today there are millions of such jobs in the United States…. A.I. can predict where the job openings of tomorrow will lie, and which skills and training will be needed for them.” Mason adds: “(B)ig data promises something closer to an unbiased ideology-free evaluation of the effectiveness of… social programs.”
Mason denigrates what she believes is our education system: “We bundle students into a room, use the same method of instruction and hope for the best.” The solution? A.I. tutors that “can home in on and correct for each student’s weaknesses, adapt coursework to his or her learning style and keep the student engaged.” The so-called tutors are computers—not human beings. Mason continues: “Today’s dominant type of A.I., also known as machine learning, permits computer programs to become more accurate—to learn, if you will—as they absorb data and correlate it with known examples from other data sets. In this way, the A.I. ‘tutor’ becomes increasingly effective at matching a student’s needs as it spends more time seeing what works to improve performance.”
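It is worth being concrete about how modest the mechanism behind such claims can be. The sketch below is not Mason’s system or any vendor’s product; it is a minimal, hypothetical illustration (every skill name and number in it is an assumption) of the adaptive loop she describes: keep a running estimate of a student’s mastery of each skill, serve the skill that looks weakest, and nudge the estimate after each answer.

```python
import random

# Hypothetical skills and a simulated student; all names and numbers here
# are illustrative assumptions, not anyone's real curriculum or data.
SKILLS = ["fractions", "decimals", "ratios"]

def update_mastery(estimate: float, correct: bool, rate: float = 0.2) -> float:
    """Nudge the mastery estimate toward 1.0 on a correct answer, toward 0.0 otherwise."""
    target = 1.0 if correct else 0.0
    return estimate + rate * (target - estimate)

def pick_next_skill(mastery: dict) -> str:
    """Serve the skill with the lowest current mastery estimate."""
    return min(mastery, key=mastery.get)

def run_session(num_items: int = 12, seed: int = 0) -> dict:
    random.seed(seed)
    mastery = {skill: 0.5 for skill in SKILLS}  # start agnostic about the student
    # Hidden "true" ability of the simulated student (an assumption for the demo).
    true_ability = {"fractions": 0.3, "decimals": 0.8, "ratios": 0.6}
    for _ in range(num_items):
        skill = pick_next_skill(mastery)
        correct = random.random() < true_ability[skill]  # simulate an answer
        mastery[skill] = update_mastery(mastery[skill], correct)
    return mastery

if __name__ == "__main__":
    print(run_session())  # final mastery estimates after a short session
```

That is all the “learning” in such a loop amounts to: a few numbers adjusted from answer data. The social life of the classroom, and everything a human teacher brings to it, lies outside the loop entirely.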
I am skeptical about Mason’s brave new world even though I acknowledge the value of social research that employs big data. Stanford’s Poverty & Technology Lab is a tiny part of much larger collaborative, cross-discipline work at the Stanford Center on Poverty & Inequality, where sociologist Sean Reardon has been using big data to inform us about economic inequality and its effects on students’ educational achievement. My skepticism does not extend to Reardon, who has done more to help us understand the impact of poverty on children than perhaps any other social scientist. With the kind of big data Mason advocates, Reardon has found a way (here and here) to document nearly a half century of growing residential resegregation by family income across America’s metropolitan areas along with a widening academic achievement gap that reflects children’s segregation by income. And recently, Reardon published a new study, once again based on big data, that teases out the impact of family and neighborhood factors in education from other indicators of the quality of a community’s schools. All this helps define the scope of our social problem of widening and deepening hypersegregation by race and poverty across America’s communities and schools, but so far, at least, it has helped us face neither the logistical challenge of what to do nor the moral problem of motivating our society to want to do something. Here in metropolitan Cleveland, Ohio, for many years I have been watching the growth of interstate highways that take the wealthy farther and farther into white, outer-ring suburbs. Reardon’s data helps me see the phenomenon in a new and more structural way, but so far nobody seems to know what to do or how to develop the political will to stop our economic segregation.
It would be helpful if big data and artificial intelligence could help us with a new war on poverty, but once again, the challenge is not so much a matter of the technical capacity to measure the problem. Although politicians today do not even mention poverty, its depth and growth are well documented. The Washington Post recently reported on a new study from the United Nations on U.S. poverty: “With welfare reform in 1996, poor single parents with children now have a lifetime limit of five years of assistance and mandatory work requirements… The number of families on welfare declined from 4.6 million in 1996 to 1.1 million this year. The decline of the welfare rolls has not meant a decline in poverty, however. Instead, the shredding of the safety net led to a rise in poverty. Forty million Americans live in poverty, nearly half in deep poverty—which U.N. investigators defined as people reporting income less than one-half of the poverty threshold. The United States has the highest child poverty rates—25 percent—in the developed world… Declining wages at the lower end of the economic ladder make it harder for people to save for times of crisis or to get back on their feet. A full-time, year-round minimum wage worker, often employed in a dead-end job, falls below the poverty threshold for a family of three and often has to rely on food stamps.” Poverty in America remains politically and morally invisible despite the presence of big data.
And finally there is the proposal that our society can personalize education with A.I. tutors—computers driven by algorithms said to respond to children’s prompts with material that addresses their educational needs and feeds their interests. Yesterday this blog explored the education philosopher John Dewey’s 1897 pedagogic creed. Dewey believed that education is not merely for the kind of individual intellectual growth that is promoted by advocates for so-called “personalized” learning. The school, as the place where the student works with teachers and peers, is also instrumental for socializing human beings. The school and the family are the social institutions where, through relations with others, children learn to be moral beings and citizens: “I believe that the only true education comes through the stimulation of the child’s powers by the demands of the social situations in which he finds himself. Through these demands he is stimulated to act as a member of a unity, to emerge from his original narrowness of action and feeling and to conceive of himself from the standpoint of the welfare of the group to which he belongs. Through the responses which others make to his own activities he comes to know what these mean in social terms.” “(E)ducation is a regulation of the process of coming to share in the social consciousness.” “This process begins unconsciously almost at birth, and is continually shaping the individual’s powers, saturating his consciousness, forming his habits, training his ideas, and arousing his feelings and emotions. Through this unconscious education the individual gradually comes to share in the intellectual and moral resources which humanity has succeeded in getting together… The most formal and technical education in the world cannot safely depart from this general process.”
Nobody ever really pinned down the meaning of Arne Duncan’s cliche that our schools must stop being “trapped in the 20th century.” John Dewey’s long life spanned the last half of the 19th and the first half of the 20th centuries. I hope nobody will try to tell me that Dewey’s wisdom is just so “yesterday.”
This is a necessary critique of AI-based education, but not a sufficient one. Yes, Dewey would roll over in his grave at this approach to children, as it ignores the democratic experience that schools should offer. But Jerome Bruner and most developmental psychologists would be equally dismayed. Bruner, Chomsky and many others understood that learning, especially language development, occurs in a social context, not in digital isolation. Human spontaneity, facial expressions, humor, warmth, affection, excitement and other factors are crucial elements of learning. An AI approach may be temporarily effective at developing a particular skill, but that is not “learning” or education. It is at great peril that we fall for the false idea that humans are digital devices and that bits and bytes will satisfy the natural need for learning and discovery. Children are organic, living, dynamic, loving, laughing, full human beings. Treating them like automata and programming them for vocation is offensive. And, as several centuries of constantly developing knowledge about learning demonstrate, it is also dreadfully ineffective. It also denies children the developmental experiences that are most likely to inspire lives of meaning, filled with joy, beauty, creativity and, yes, sadness too. I want my grandchildren to be fully alive, not trained like laboratory animals. Schools would be far better off with no computers whatsoever.
Thank you, Jan, for this crucial piece exposing the misguided mindset of those who advocate algorithmically mediated “teaching/learning” via digital modules in place of living, perceptive, knowledgeable, and compassionate teachers. The misuse of the word “personalized” is egregious. When most people think of personalized, they imagine teachers who truly listen to their students and respond to them individually as human beings. This is completely different from having a computer algorithm instantly fart out the next question based on the student’s answer to the last one. It is not engaging. Besides being pedagogically counter-productive, digital learning exposes children to excessive screen time that acculturates them to this alienating process as normal. Screens with their bells and whistles are inherently mesmerizing, but literally harmful to children’s physical, mental, and emotional health. Also, children (and their parents, for the most part) do not realize that they are being surveilled with every keystroke. This is the same process that produces the lucrative targeted advertising that corporations rely on and that has been abused in our electoral system. For a superb explication of the dangers of algorithmic decision making, see Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil. For an understanding of how computer-adapted digital learning corrupts the administrators of school districts, undermines the true education of children, and wastes taxpayers’ money that is desperately needed for making school buildings safe for humans, get the real story from Baltimore County parents on the tragic fiasco that’s been happening in the schools there. https://statusbcps.wordpress.com/ For an in-depth foundation to understand the dangers of “personalized” learning, AI, and related technological innovations, as well as Social Impact Bonds/Pay for Success, see Alison Hawver McDowell’s blog Wrench in the Gears and Emily Kennedy Talmage’s blog Save Maine Schools.
If you are not factoring “pay for success” and “social impact finance” into this narrative, you are missing the bigger picture. See this article on PFS by the author of the article you reference. Be very careful. There is a lot of profit in poverty, and Stanford is central to that enterprise. https://ssir.org/articles/entry/facts_over_factions
Some wag once said, “There’s nothing new under the sun.” I remember that a school district neighboring the one where I taught adopted PLAN (Programmed Learning According to Needs) for its elementary children. It was all based on an individual going through lessons on his or her own, with the teacher just monitoring their “progress.” The school district wisely threw it out after two or three years. This kind of learning is unnecessarily lonely by design. Absorbing facts and data is not education. There were no literature circles where children discussed Great Books and stories. Now the PLAN idea comes around again, forty years later.