Spotting Bullshit in the Era of GenAI
There’s Too Much Nonsense on the Web Now. How Do We Prepare for GenAI?
I often fantasize about teaching a course about our cultural moment, and boy are we in one now. GenAI’s impact may be far-reaching, challenging core concepts about what it means to write, to read, and to “be” within our digital realms. It can do all of this, and it can also absolutely go away with the right lawsuit or piece of legislation. Our moment is that precarious. If I could teach a course about all of this, I’d make it a combination of Calling Bullshit, Digital Gardening, and community engagement about GenAI. I’d ask students to take a deep dive into how technology is shaping their IRL lives and to pause and consider what this change means to them, because I get the strong sense few people are asking them these questions. What we’re seeing now with GenAI’s rapid deployment and adoption is nothing less than a surrender of cultural forces to technology, what Neil Postman termed Technopoly nearly three decades ago. Indeed, our culture was obsessed with generating content, often without purpose, long before GenAI.
You can now get a six-figure job as a digital “prompt whisperer,” and you don’t even need a background in computer science: wordplay and curiosity matter more than your ability to code. Hype is certainly fueling much of this, but we’re starting to see early studies showing that workers enjoy using GenAI to handle simple, mundane tasks, like sending emails. What does this mean for students today? There’s a coming divide between those who possess the skills to prompt a creative and iterative process with GenAI and those who do not.
Educators as Seekers of Meaning
Ethan Mollick’s Substack is a roadmap for why this argument matters. One of his recent posts discusses how teachers are squarely in the midst of this technological shift and calls for them to become Edtech designers using GenAI. He uses Bing’s Chatbot to design middle school science assignments, including activities and quizzes, and even asks Bing to generate code for a simple game about molecules, which he then runs in a Python notebook. That’s right: Mollick used GenAI to create an interactive learning experience with nothing but his wits and free software. His point is that teachers are now capable of creating and curating learning experiences for their students without outsourcing that service to Edtech companies, and this can democratize education. Put another way, a teacher can wield technology to create meaningful moments of learning for students without middlemen mucking it up.
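To make the workflow concrete, here is a minimal sketch of the kind of “simple game about molecules” a chatbot might hand back for a notebook. The molecule list, function names, and game logic here are invented for illustration; they are not Mollick’s actual prompt or output.

```python
# Illustrative sketch of chatbot-generated classroom code: a tiny
# molecule-naming quiz a teacher could run in a Python notebook.
# All data and names here are hypothetical, chosen for this example.
import random

# A small dataset a teacher could swap for their own curriculum content.
MOLECULES = {
    "H2O": "water",
    "CO2": "carbon dioxide",
    "O2": "oxygen",
    "CH4": "methane",
    "NaCl": "table salt",
}

def check_answer(formula: str, guess: str) -> bool:
    """Return True if the guessed common name matches the formula."""
    return MOLECULES.get(formula, "").lower() == guess.strip().lower()

def pick_question(rng: random.Random) -> str:
    """Choose a formula to quiz the student on."""
    return rng.choice(sorted(MOLECULES))

if __name__ == "__main__":
    rng = random.Random()
    formula = pick_question(rng)
    print(f"Quiz: what is the common name of {formula}?")
```

The point isn’t that this particular game is impressive; it’s that a teacher with no programming background can ask a chatbot for something like it, paste it into a free notebook, and tweak the data to fit that week’s lesson.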
The promise of GenAI in this context is amazing. Who wouldn’t love to pull together assignments simpatico with their teaching style, aligned with learning outcomes personalized to each student and each instructor?
But prompting like this requires nurturing those creative impulses, developing a sense of play, and being able to spot bias, hallucinations, and errors in real time. We can teach students those skills, and many will go on to use them in their careers to good effect. However, what stops those newly skilled actors from “prompting bad” and becoming bullshit generators, creating misinformation, propaganda, harassment, or viruses, or simply adding to the noise that makes up so much of our digital lives?
If we’re going to teach students to use GenAI, then we need to also teach them the ethical responsibility wielding this technology requires. We are all stewards of the public places we haunt, and at some point, we’re going to have to explain to students the digital equivalent of “do not litter.” Don’t add to the misery of others by using this technology to create a shitty experience for other human beings.
You Can Teach Someone To Spot Bullshit
A call for ethics is one thing, but we’re also going to need to help students spot bullshit in real time because we’re wading in it now and will be drowning in generative BS by this time next year. Part of this involves taking a deep dive into how LLMs like Bing’s Chatbot and ChatGPT are trained, but I doubt we’re going to get far down that road. Rather, let’s aim to impart a healthy skepticism in our students to help prepare them for a world of convincingly worded and imagined BS.
The internet is a vivid experience. Even in its earliest days, Barry Hannah opined that we were living in an age where the tyranny of the visual overwhelms us, a bewildering juxtaposition to Joyce’s “thought-tormented age” nearly a century before. Our digital reality is often a mind-numbing stream of text, video, and images as we scroll through countless sites and apps. We are in an age where companies code such experiences for maximum engagement, an era where life, as Hannah notes, “is too vivid for thought.” That’s why it’s so important to cultivate a sense of questioning, of doubting, as a form of critical thinking.
Doubting thoughtfully is a skill we’ve beaten out of ourselves and our students by signaling compliance as a virtue. Debate, both online and in real life, has taken a backseat to siloed positions we volley at one another with closed ears. Bullshit is apolitical. It doesn’t matter if you are on the left or the right: you, and especially your students, will need to spot bullshit in all of its digital forms.
What bullshit, you say? Imagine all the existing crap, from spam to phishing, we’ve come to cringe at, then feed it steroids. We’re going to see personalized email scams tailored to each of us and horrifying misinformation campaigns. Deepfakes are going to become the norm, not just for celebrities or politicians but for kids in school. We’re already seeing reports of students creating nonconsensual deepfakes of one another on their smartphones in classrooms. This needs to be regulated, and fast, but in the interim we’re going to have to teach students about digital consent, ethics, and how not to treat one another like jackasses with GenAI. That’s going to take time and teachers being teachers, but for them to do their jobs effectively, they need to understand GenAI, warts and all.
When Neil Postman talks about Technopoly, he’s arguing that “new technologies alter the structure of our interests: the things we think about. They alter the character of our symbols: the things we think with. And they alter the nature of community: the arena in which thoughts develop.” These changes happen rapidly, and the best way to process what’s going on is to slow things down and talk to all the stakeholders about how our world is changing, and in doing so take back some level of control. As Postman so thoughtfully predicted decades ago, “something has happened in America that is strange and dangerous, and there is only a dull and even stupid awareness of” how innovative technologies like GenAI change the fabric of our lives.