“Resist much, obey little,” wrote Walt Whitman in Leaves of Grass.
Foreword
Once again I found myself simultaneously reading very different books that gave me very different impressions. I have put out into the world a presentation addressing how I view the use of AI in studying history with me in my classes. The presentation is called “HI: Human Intelligence.” (Spoiler: I don’t want to see AI in my classes. I want to teach human brains. Yes, I want brains.) I have shared that presentation here on Substack.
Today’s little essay is about some of the books on AI in general, and on AI in education specifically, that I read in the making of that presentation. These are my impressions of three of those books.
Book the First
I watch horror films to relax. The gore, the end-of-the-world scenarios, the zombies, the vampires (you name the monster), the death and destruction are all fascinating objects of inquiry. Studied correctly, they can provide insight into contemporary societal angst; and if I don’t feel like studying them, they are still entertaining at their core. Horror films are, to me, fun, relaxing, and intellectually stimulating.
Salman Khan’s Brave New Words, however, scares the hell out of me. Titled, evidently without irony, the book praises artificial intelligence in education. Praises is too weak a word. Its subtitle is How AI Will Revolutionize Education (And Why That’s a Good Thing).
Let’s jump to the chapter called, “Conversing with History,” because that is what I teach. Here’s the opening paragraph:
Good history and civics teachers make the past interesting. Great [emphasis in original] history and civics teachers make the past come alive. When history and civics meet artificial intelligence, the past gets a voice, a perspective. Rather than a static time and place to study, it becomes a rich context to interact with.1
That is hardly what Faulkner meant when he said, “The past is never dead.”
With no more justification perceived as necessary than that teachers’ primary objective must be to make the past come alive, rather than treating it as what it is (the past, the object of study for an entire discipline), the chapter proceeds to lament excellent classroom and teaching techniques such as reading, “Socratic dialogue, debates, and simulations,” in which students engage with each other and the teacher. Khan is dissatisfied with human interaction:
While this type of rich activity can help students engage deeper with the core content [i.e., the past, my addition], the lessons are not easy to plan or facilitate. And it is even harder to ensure that every student is fully engaged and to assess the quality of that engagement.
Enter Khanmigo. Every one of those activities can now happen directly between a student and the AI.2
[Insert expletives here.]
Do you see a problem or two here? The author, however, sees no problem with this “Brave New World.” The rest of the chapter offers example conversations between “a student and the AI,” in which the AI dominates the conversation like a narcissist.
It seems to me that Khanmigo’s goal is to control. Do I wish I could “ensure that every student is fully engaged,” or that all the students even just did the reading? Sure, but they are human beings with choices to make, lives to live, and part of a student’s job is to learn how to find it within herself to want to learn. Khanmigo sounds like it’s meant to be that horrible micromanager who never trusts his employees.
Because it appears to me that this book is no more than a sales brochure for a product I’d never buy, I’m going to move on to more promising waters in the next book, but I felt it necessary to include this book because it is out there, it’s received praise from Bill Gates, and it might be influencing people.
Book the Second
Is AI unavoidable? Is AI inevitable? Is AI destined to change every task, job, profession a person could hope to master?
AI is useful. There is no doubt about that. It aids humans in many industries and will continue to do so. It will be prudent for everyone to become familiar with how it works, and many people probably ought to become familiar with how to use it and how to apply it in new ways.
The second book to discuss here is Teaching with AI: A Practical Guide to a New Era of Human Learning by José Antonio Bowen and C. Edward Watson.3
The most baffling thing about this book is the authors’ inability or unwillingness to deal with the obvious contradictions that punctuate it. They acknowledge the flaws and the bizarre about AI and then dismiss them as if they were someone else’s problems. Here’s an example.
On page 18, a couple of flaws associated with AI seem to be in the authors’ sights. Let me walk you through their line of reasoning about bias and fictional outcomes.
Since the pretraining sources (including the monthly Common Crawl dataset taken from billions of web pages) contain the good, bad, and ugly of human thought, and since LLMs assimilate to predict, they are bound to reproduce the bias and hate in their source material.4
“AI can also amplify [emphasis in original] the bias on the internet.”5 So, the advice given so far is, AIs are bound to reproduce bias and hate and they are likely to amplify it. Fantastic. (That was a sarcastic “fantastic,” in case you couldn’t tell.) But some try to correct for it, with fiction.
Adobe’s Firefly AI image generator tries to correct this. …Firefly has been trained to increase the probability that a request for an image of a Supreme Court justice in 1960 will be a woman, even though Sandra Day O’Connor became the first woman appointed to the Court in 1981.6
The authors, however, have a response for this completely fictional and false portrayal of reality.
Bias can come from training data, but the well-intentioned Firefly examples highlight another set of potential problems: human reviewers who rate and provide feedback for the model’s output also have bias. If AIs can create images of the world as it could be or as it is, who gets to choose?7
If I understand this line of reasoning correctly, they are saying that humans are the problem, especially humans who wish reality to be portrayed accurately.
Yes, yes, that is exactly what they are arguing, because the next paragraph begins,
The G in GPT is for ‘generative.’ Because these models ‘generate’ new sentences, images, and ideas by sorting probabilities of a next word or pixel, they are prone to ‘generate’ false data or ‘fabricate’ fictional references. This ability to ‘hallucinate’ makes AI a terrific tool for creativity: it will pout ideas and words together in ways that human might never have done before…8
The “scare quotes” they use indeed ought to scare you.
I grew up with a schizophrenic mother; to put it bluntly, I learned early that one does not reinforce or tolerate hallucination. “Hallucination” is not a “cute” word for the fabrications an AI can put out, and it is not equivalent to “creativity.”
I took this example from early in the book because I felt it demonstrated a foundational attitude for the whole book, which, yes, I did read in its entirety, looking for something I could use in my classroom.9 But the authors’ insistence on overlooking AI’s deficiencies, in fact arguing for accepting them rather than questioning them, was a significant factor in my reaching the conclusion I presented in “HI: Human Intelligence.” The only way I can make sense of the authors’ illogic is that they believe AI is inevitable and will be ubiquitous, and that therefore we educators must embrace it wholly and must not question it.
Questioning is critical thinking, which the authors acknowledge as something that “will remain essential,”10 though they appear unconcerned with how to teach it.
Book the Third
And then there’s this guy. Jaron Lanier. I’ve written about him before.
When I want explanations of social media, the internet, and so-called digital life in terms of what is going on technically, in terms of economics, ethical and moral decision-making, social and cultural impact, and just plain how to ensure one continues to be a human being in the midst of digitization, I turn to Jaron Lanier. In You Are Not a Gadget, he states plainly, “The most important thing to ask about any technology is how it changes people.”11 And then he pursues this line of inquiry with the kind of thoroughness and nuance that leaves you a better person for having read it.
He places his evaluation of technology’s significance squarely where it belongs: on humans. Humans design it, humans choose how to use it, humans reap or suffer the consequences. I find it especially fascinating when he touches on creativity. Creativity comes from humans, humans need meaning, and meaning depends upon context. A result of divorcing human creativity from its (real-life) context is (and this is me oversimplifying Lanier’s thoroughness and nuance) to make what a human creates worthless. Software like ChatGPT makes money for people on the backs of first-order creativity.12
Lanier is much more optimistic than I and he seeks ways to turn the ship (i.e., the internet) we are sailing — that is, to stay focused on humans’ ability to choose — toward a future in which human creativity is valued in concrete ways. I also want people not to lose their ability to think, to be creative, and thinking and being creative can be really, really hard to do.
The authors of the first two books argue for avoiding that which is difficult, as when Bowen and Watson make this wholly unconvincing argument:
Creating tests is hard work; creating good tests is even harder work. Make it easier by giving AI some course materials and asking it to generate study guides, review questions, or exams of different types.13
As a professor who has had students express gratitude for the tests that help them understand the material better, I am personally offended by this dismissal of hard work. In contrast, Lanier offers a list of “‘what each of us can do’” to turn the ship. Among other thought-provoking suggestions, the list includes:
Create a website that expresses something about who you are that won’t fit into the template available to you on a social networking site.
Post a video once in a while that took you one hundred times more time to create than it takes to view.
Write a blog post that took weeks of reflection before you heard the inner voice that needed to come out.14
My own flaws
I started to write on Substack because I needed a different kind of creative outlet in a specific moment in my life. This imperfect expression of my thoughts about just three books feels incomplete. I invite you to view my HI presentation and to engage me in conversation so that we can all understand better.
Salman Khan. Brave New Words: How AI Will Revolutionize Education (And Why That’s a Good Thing). (Viking, 2024), 52.
Ibid., 53.
José Antonio Bowen and C. Edward Watson. Teaching with AI: A Practical Guide to a New Era of Human Learning. (Johns Hopkins University Press, 2024).
Ibid., 18.
Ibid.
Ibid.
Ibid., 18-19.
Ibid., 19.
I marked many more examples of what I find to be bizarre arguments for removing thinking from education, but I didn’t want to overwhelm this essay with critique and questions that might be perceived as negativity. And in my presentation, “HI: Human Intelligence” I raise one or two other points about this book.
Bowen and Watson, 36.
Jaron Lanier. You Are Not a Gadget: A Manifesto. (Knopf, 2010), 36. I find it hilariously ironic that a book with the subtitle “A Manifesto” provides a much more even-keeled assessment of the digital world than one with the subtitle “A Practical Guide.”
Lanier spoke at UC-Berkeley a year ago about OpenAI, and evidently he “emphasize[d] the importance of understanding AI algorithms. Although models can remember chat cycles, their memory can be unreliable.” And he “candidly addresse[d] the challenges faced when AI models, such as ChatGPT, generate responses that are dramatic, romantic, or even shocking.” One article about his talk can be found here: https://cio.ucop.edu/jaron-lanier-the-father-of-vr-addresses-tech-enthusiasts-at-uc-berkeley/; another here: https://cdss.berkeley.edu/news/jaron-lanier-wants-you-stop-saying-ai. The books by Lanier I’ve read touched on but predated the unleashing of software like ChatGPT into the mainstream, and it is consoling to me to find that Lanier also seems suspicious of the so-called “hallucinations” such software can produce.
Bowen and Watson, 94.
Lanier, 21. The authors of Teaching with AI are concerned only with training students for jobs, not educating them. Lanier, by contrast, consistently emphasizes a person’s ability to think and to create. I know who I would prefer to emulate.