HI, again
I updated my “H(uman) I(ntelligence) not AI” presentation that I put into all of my classes.
I have been both encouraged and discouraged over the past year as I have read more about genAI, especially in relation to higher education. Every time I think about writing here on Substack about genAI, I teeter between two extremes: on one side, "of course enough people understand that genAI is an affront to their humanity"; on the other, "people don't seem to care about the damage it's doing to us and the environment and seem to think it's something they have to accept."
I see many people writing about how our humanity is worth fighting for. Here are a couple of recent ones:
Émile P. Torres explains AI’s various mishaps in “Why You Should Never Use AI Under Any Circumstances for Any Reason No Matter What.”
Brad Montague explains “Things You Can Do That A.I. Cannot Do.”
John Warner’s book More Than Words: How to Think About Writing in the Age of AI waltzed into my reading list and into the heart of my reasoned intellect, its gait reassuring me that the higher ed I want to defend is worth defending.
And yet I see many people, some at institutions of higher education, ignoring the damage and asserting that genAI is a necessary teaching tool. There is a place for teaching about or even with genAI in higher ed, but it ought to be limited in scope, and it is not in my classroom. Higher ed has more valuable goals than teaching students to prompt their way into peonage.
People emphasize that genAI produces ideas for getting over a creative hump or can do quantitatively what no human can do. GenAI does not think. The so-called intelligence does not think. But it does “hallucinate” (i.e., lie), and it can double down on its lies. It has created (that is, forged) documents to back up a genealogical lie it told earlier. It has made up case citations for lawyers lazy enough to try to submit them to a court; luckily, the falsities were recognized. (Can you imagine if fake case law gets entered into a real legal case?)
One should be able to say, “genAI tells lies” and that ought to be the end of the discussion of whether it should be used in historical research or in the teaching of history.
At least Western Connecticut State University’s Library gets it:
Beware: ChatGPT Can Hallucinate!
ChatGPT is a Large Language Model. It works by predicting the next word based on what words came before - in other words, the context of the moment. It is not capable of judging the accuracy of what it is providing at that moment. Just as autocorrect often fails at predicting the right word, so can ChatGPT fail - but on a larger scale.
ChatGPT is prone to what are being called “hallucinations” – information that is completely made up. Unfortunately, hallucinatory information can look convincing and can even come with nonexistent citations (see the article from The Guardian in the page “Associated Sources for ‘Hallucinations’”).
ChatGPT and Scholarly Work
For scholarly research purposes, ChatGPT is unreliable. Often, when queried further, ChatGPT might recognize and apologize for its error. When corrected, it may change its answer to a correct one; other times it might provide a different incorrect answer.
Scholars who have experimented with ChatGPT agree that the tool can be useful in some applications, but only if one is familiar enough with the information it provides to be able to certify its accuracy. In other words, its information should not be taken at face value, but carefully evaluated. For this reason, its use poses a risk for students who are in the midst of attempting to master a subject and may not be capable of judging the accuracy.
A person needs expertise to evaluate genAI output. A person cannot build expertise by learning from genAI. Why? For one, because it does not think. For another, because it lies.
I pride myself on saying to students, “I don’t know,” when I don’t know. Students know they can trust me because after I say, “I don’t know,” I add, “but we can talk about how we might think through the question to know where to look for an answer, or how to find or come up with an answer.” Because we are sentient thinking beings.
“You can bind up my leg, but not even Zeus has the power to break my freedom of choice.” - Epictetus, Discourses, 2.10.1
Nothing is inevitable. We can choose how we relate to genAI. My presentation is my imperfect explanation for why it is not in my classroom.
I am encouraged this year by an early result of the updated presentation: a student took the time to make an image of her own.