The Hallucinating Machines We Can't Live Without
An Algorithm Brought You To This Anti-Algorithm Essay

The discourse around generative AI technology is often locked into a binary between doom and salvation, with little in between. What we really need are nuanced approaches, centered on human agency and experience, to how new technologies complicate and alter our habits and skills. Unfortunately, what we’re getting in lieu of that is marketing pitches designed to go viral, mixed with end-of-days narratives about the destruction of learning, knowing, and humanity. I think it is important to continually put our historical moment with technology into context, if only to show just how much work we have ahead of us as we navigate what it means to be human in this evolving digital era.
Panic and Hype over New Technologies Are Not New
In 1984, Bradford Morgan and James Schwartz made the following prediction in The Future of Word Processing in Academic Writing Programs. See if any of this phrasing sounds familiar from our current AI discourse:
Word processing is becoming recognized as the operational standard in business and other forms of professional writing. It will continue to strengthen its hold in the future—and rightly so, because word processing, like other computer-facilitated efforts, exponentially extends a writer’s efficiency and capability. Is there any reason that our students should receive anything less than a state-of-the-art education in writing? Armed with this future-certain communication tool, won’t students indeed have the competitive edge?
Word processing will gradually become an important component of academic writing programs because of its ability to save time, conserve labor, stimulate production, encourage revision, and solve problems. Those who teach writing, however, need not fear that hard-won experience will become outdated. Traditional objectives and the body of writing theory will remain intact, but the computer has the potential to revolutionize the production of student manuscripts and allow for more “professional” use of faculty time.
Sound familiar? They also noted massive faculty resistance in a separate article that appeared that year: “Nevertheless, today there appears to be as strong an opposition to the introduction of computer technology into the writing process as there was over four hundred years ago to the introduction of the printing press into the bookmaking process.”
Some 41 years later, we could swap “word processing” for “AI” and see similar arguments across social media.
Technological Innovation Isn’t Even, Equitable, or Inevitable
The technology that powers ChatGPT will always be error-prone and produce hallucinations. According to OpenAI’s recent report, Why language models hallucinate, tools like ChatGPT “will never reach 100% [accuracy] because, regardless of model size, search and reasoning capabilities, some real-world questions are inherently unanswerable.” Maybe we should tell our students this? Would they care? Do you? It seems the world has been primed to accept the uneven outputs of generative, and now agentic, LLM-based tools as useful, even vital, regardless of their flaws. Here it is, folks, AI—warts and all.
What an odd time for industries to rush to integrate a technology that even the largest company marketing it admits will never be totally accurate. Cue the rhetorical gymnastics and unsettling comparisons of machine intelligence to human intelligence. AI might mess things up, just like human beings, and we’ve accepted those limitations as part of reality. Organic and machine intelligence are most certainly not the same, but boy, does it make us feel good to think about a machine we’ve made to mimic language, to mimic us, as being imperfect just like we are. This may just end up being yet another technique to keep the public enamored with AI by embracing its inherent fallibility.
So why hasn’t AI fizzled out with the revelation that we will never have 100% accurate LLMs? One would imagine that hearing a company admit its product makes mistakes would cause venture capital to flee, not invest $100 billion, as Nvidia recently announced it would in partnership with OpenAI.
Model errors anthropomorphized as hallucinations aren’t exactly news to the machine learning community. Google largely understood that these error rates were statistical outcomes of transformers and neural networks when it open-sourced the technology powering LLMs back in 2018. Instead of abandoning a technology that is error-prone, and always will be to a certain degree, shouldn’t we put it into historical conversation with past innovations that likewise proved imprecise but ultimately highly useful?
Imperfect but Useful is the Norm with Technology
I’m sure each of us who owns an Amazon device with Alexa or another smart assistant has yelled at it on more than a few occasions. Innovations often take several generations before usability improves and we comfortably adapt to them. Some of these errors are never addressed and instead become part of our cultural zeitgeist. The warm sound of vinyl records that many audiophiles gravitate toward is the result of countless imperfections in the manufacturing process. The errors give rise to the character of the sound, so much so that many prefer the snap and crackle of records to polished digital audio.
Early sailors had to navigate across oceans using magnetic compasses that were notoriously unreliable and prone to failure. Most of the scientific innovations we now associate with accuracy and eventual breakthroughs suffered from similar initial flaws: telescopes that couldn’t focus, thermometers that didn’t accurately gauge temperature, old coat hangers bent into impossible shapes just to catch an over-the-air TV signal, even the humble landlines many of us grew up with that would sizzle and crackle during thunderstorms. Remember those old films where pilots had to tap the analog fuel gauge to see if the reading was accurate?
We’ve all dealt with modern cell phones dropping calls, GPS trying to take us on lofty adventures through some random field, and many of us came of age trying to blow on the business end of a Nintendo game cartridge with the vain hope of getting a game to work. Did we abandon those technologies for all their very real flaws?
AI doesn’t need to work with precision to find use cases in our lives. That’s not an endorsement of AI—far from it. It is equally true that AI doesn’t need to be integrated into all the apps we use daily, replace search, or be marketed as the universal salve to fix all of our problems. Yet, like many imperfect and problematic technologies, AI isn’t going away.
Given this long history of adapting to flawed but useful technologies, it’s striking how many current responses to AI call for complete rejection rather than thoughtful exploration. Instead of asking how we might develop healthy habits around technologies like AI, some prominent voices are advocating for a complete unplugging from the digital world entirely.
An Unplugged Movement Gains Traction
Some argue the issues with technology go beyond AI and focus instead on the endless hours teens spend online. But it isn’t just teens on their phones—it is all of us, sitting in front of endless screens, scrolling through feed after feed and responding to emails that never cease. Our personal and professional lives are largely online now. The arena of ideas is almost entirely digital. We no longer debate ideas in the village square or read opinions in the columns of print magazines and newspapers.
Several of the physical books I’ve read about the decline of living in the moment, like Christine Rosen’s The Extinction of Experience: Being Human in a Disembodied World and Sheila Liming’s Hanging Out: The Radical Power of Killing Time, hark back to an era when people lived without constant distraction. But their audience isn’t the young—it’s those of us on the north side of 40 who cling to nostalgia for the past and yearn for a return to a time when experiences weren’t mediated by algorithms.
Tyler Austin Harper’s The Question All Colleges Should Ask Themselves About AI positions the university as facing a pivotal choice: either isolate digital technology from learning as much as possible, even removing it from campuses entirely, or give up on the mission of learning altogether. In Harper’s view, institutions have to take a radical stance against AI and figure out how to limit its impact on learning:
Shunning AI use in classrooms is a good start, but schools need to think bigger than that. All institutions of higher education in the United States should be animated by the same basic question: What are the most effective things—even if they sound extreme—that we can do to limit, and ideally abolish, the unauthorized use of AI on campus?
What takes me aback about this position, and even about the print books mentioned above, is how much each was shaped out of the ether of digital life. We once might have read those words in the physical pages of The Atlantic, but now we read them on our screens. And if we’re so inclined, we can listen to the publisher’s AI-generated podcast narrate the very argument that we should remove technology to save what makes learning human.
An algorithm brought you here, even if you didn’t realize it. You may have heard about this newsletter from a friend of a friend; even so, a machine-based process was involved in spreading these bits of text to them. There’s arguably been no greater democratizing force than the open internet, yet no clear sense whether that openness actually still exists. The algorithms that cause ideas to spread, go viral, and worm their way into our daily discourse aren’t neutral features of digital technologies—they are, at heart, organs of massive corporate interests.
The articles lamenting the end of embodied experiences don’t come from discussion at a coffee shop with friends, at a dinner party, or in a casual encounter. Instead, they spread like wildfire through clicks, likes, and reshares across the web of social technologies on our screens. Advocating for unplugging completely cuts against the medium the very message is relayed upon, and it fails to take into account that doing so would cause many to forgo access to the digital places where people go to discuss their culture and share in how their world is changing.
But it isn’t simply AI. James Marriott’s excellent recent essay The dawn of the post-literate society takes a broad historical view of screens and how digital technologies have impacted reading and critical thinking. Marriott’s points about how screens have corrupted our ability to read and process information are valid. However, as much as I’ve idolized reading in this newsletter as a transformative force for good, I think we should be incredibly cautious about becoming nostalgic for an idolized book culture. Mass literacy was no miracle salve for critical thinking in the face of unending information, which gave rise to propaganda and nationalism.
One of Marriott’s claims in particular could be balanced with some deeper historical context: “As you have probably noticed, the world of the screen is going to be a much choppier place than the world of print: more emotional, more angry, more chaotic.” While my initial reaction is to vigorously agree with him, I’m not so sure historians of the French Revolution, the First World War, or the rise of Nazi Germany, or those who chronicled the yellow journalism era that contributed to the Spanish-American War, would agree that print was somehow a less emotional, angry, or chaotic medium for people to contend with during those times than our current woes with digital misinformation. Print literacy is undoubtedly one of the most important educational concepts we have, but printed text, and even the language coded within it, is still an invention, a tool, a process, one ripe for misuse and not innately good or bad by itself.
We Have to Confront a Changing World
Now is the time to lean in and have conversations with one another, both virtually and in person. As much as many of us would advocate for and value a forced digital detox for students, we have to find pathways to teach and model healthy habits with screens and without them.
Derek Thompson’s The End of Thinking takes a balanced look at how new technologies alter our interests and offers a much-needed nuanced stance on approaching AI in medical education:
It would be simple if the solution to AI were to simply ignore it, ban the technology on college campuses, and turn every exam into an old-fashioned blue-book test. But AI is not cognitive fentanyl, a substance to be avoided at all costs. Research in medicine has found that ChatGPT and other large language models are better than most doctors at diagnosing rare illnesses. Rejecting such technology would be worse than stubborn foolishness; it would, in real-life cases, amount to fatal incompetence. There is no clear bright line that tells us when to use an LLM and when to leave that tab closed.
The dilemma is clear in medical schools, which are encouraging students to use LLMs, even as conscientious students will have to take care that their skills advance alongside AI rather than atrophy in the presence of the technology.
There’s no reason why we cannot advocate for device-free zones on our campuses, create invitational spaces for embodied learning, or show students what it is like to learn without devices. AI might well prompt universities to create course learning objectives around such embodied learning experiences—a far more reasonable response than outright rejection of an entire class of technology in education. Many faculty are already using in-person class time to explore close reading, interpersonal communication, and other human-centric skills:
Mark C. Marino has his writing students explore a unit device-free as part of his Analog sandwich approach. Marino mixes an AI-heavy writing process with an analog, unplugged one. Key to it all is giving students the agency to decide which approach fits them best in the end.
In Going Old-School: Professors Use Print Books to Teach AI, Helen Choi describes using close reading to teach students about the ethical issues surrounding AI through Karen Hao’s Empire of AI.
My colleague Bob Cummings penned A Digital Writing Researcher Teaches without Digital Tools about teaching a writing course without digital distraction, while recognizing that students still have the ability to use technology, including AI, outside of the classroom.
Another colleague takes a semester-long approach in which students opt in to using AI or select a no-AI path, and guides them to maintain a mindful ethos about their process.
One colleague’s amazing reflection shows how incorporating something as simple as a spiral notebook can transform a class, create a sense of community, and serve as a pedagogical buffer against the overuse of screen culture in classes.
All of these pedagogical responses arise out of faculty experiences with AI and technology in the classroom. They are real, lived experiences, and they represent a tiny fraction of the astounding work faculty have been forced to do over these past three years in trying to help students navigate an experiment none of us asked for. These are the stories that deserve our attention and serious consideration, but pedagogy is rarely provocative. It hardly goes viral, and you won’t see it shared widely or appearing in your feed. The work done by faculty becomes invisible in the sea of noise praising and decrying AI. Let’s work on elevating the voices that try to move the needle.
I’ll be working with Annette and Derek Bruff on The Norton Guide to AI-Aware Teaching over the next year to help faculty create courses and shape teaching practices around AI awareness. Annette describes such awareness as “teachers knowing what AI is capable of so we can successfully steer our courses and students to good learning outcomes. Like a lot of technologies, AI can be useful and dangerous at the same time. The more teachers know about how to use it well, or to avoid it when it gets in the way of learning, the better.”





As always, I love the nuance, research and thoughtful analysis you bring to these questions of AI and learning. We need to move past the for / against framing, and the historical perspective is helpful. You don't see a lot of protests against word processing now! Yet it fundamentally changed the way we write. For example: I was recently reading John McPhee's Draft #4, where he talks about arranging his stories in index cards. Does anyone do that anymore? Should they? That's an honest question. We need to evaluate what's useful and potentially destructive about a new technology. And yet, the world marches on as well and we need to be honest about that.
I love the analog sandwich approach! Some units reserved for purely human thinking and writing, specified units for learning about AI and reflecting on it.