Many in Silicon Valley are obsessed with sci-fi warnings of Terminator-style AI destroying humankind. X-risk has become the shiny new toy that's distracted attention from current problems. Over the past week, several members of OpenAI's board were seemingly willing to burn down their company because they believed that Sam Altman was moving too fast toward building actual AI, not just a chatbot. It's like forecasting your toddler could someday join the NBA and pouring your time and money into making that happen instead of teaching them to tie their shoes. Society is facing issues right now stemming from current AI models that are already more capable than any programs in history. But few appear to be bothered with the educational equivalents of shoe-tying and potty training when they've got their sights set on playing god and building a true thinking machine.
Racing Toward a Fantasy Instead of Facing Our Reality
Education has been reeling since ChatGPT's launch last year, and we could desperately use a pause to take stock of where we are now and what this new technology means for teaching and learning. This would give educators time to revisit assessments, develop communities of practice around teaching with (or against) generative AI, and weigh the benefits against the considerable costs of uncritical adoption of the technology by the general public.
We need a coherent national strategy to educate all stakeholders in AI literacy. Such an initiative will take years to coalesce and likely many more to enact in any organized, funded manner. The establishment's snail's-pace response to technological change isn't likely to improve anytime soon, but at least some of us can use the brief lull in deployments to advocate for funding programs and help prepare faculty for the next series of blows that's sure to follow once the other tech companies catch up.
One thing from the OpenAI board debacle is clear: the myopic focus of many on the long-term future over safety in current generative AI systems is one of the most destructive forms of hubris we've seen. This mindset allows adherents to ignore many of the issues posed by today's generative AI products because none of them compares to the awesome power and dread of so-called Artificial General Intelligence, a.k.a. the science-fiction AI that Hollywood likes to depict destroying humanity.
I don't care how much time and money billionaires spend chasing the fantasy of AGI, and at this point, it is just that: a fantasy. No serious computer scientist believes that we will achieve AGI with current transformer-based systems. Yet this notion of quite literally playing God has gripped a huge swath of Silicon Valley. The hubris is so deep that many who are frightened about creating such an advanced technology believe they must be the ones to do so, because they and only they are ethical enough to ensure such systems are aligned to serve humanity rather than destroy it. The same hubris drives the public release of these tools, all of which still carry the experimental moniker. The reasoning: to understand how advanced AI systems will impact society, they need to be tested in real time by the general public. That's never sounded reasonable to me.
What should upset everyone isn't just the hubris; it's the complete dismissal of what Doomers call near-term harms in favor of the aforementioned fantasy. If you are worried about AI influencing elections, spreading mass disinformation, entrenching unchecked bias, fueling waves of nonconsensual deepfakes, automating knowledge work, or deskilling students, then your concerns aren't going to rise to the level of these folks' interest. You see, each item is just a distraction, pulling focus and resources away from the problem posed by superintelligent machines that could kill us with ease.
Focusing only on the potential long-term impact of a yet-to-be-invented technology while ignoring its current harms offloads moral responsibility onto a future that may never come. Take, for example, the nuclear annihilation many of us grew up dreading during the Cold War, a future that ultimately did not come to pass.
When Armageddon Was Postponed
I grew up in central Missouri in the 1980s, amid one of the largest deployments of Minuteman III silos in the nation. The logic was that putting nuclear missiles in the dead center of the country would give the military maximum time to respond and successfully launch them if the Soviet Union ever decided to attack. I grew up three miles away from silo E10 on the map below. I had no idea the tiny fenced-off area I could see from Route Y when we drove past held an intercontinental ballistic missile with multiple warheads. We did not practice duck-and-cover drills in my school because there would be no point in doing so. We understood that we would be wiped out in the first wave of an attack or counterattack.
It wasn't until I watched my small hometown of Sedalia be completely wiped out in The Day After that the full gravity of the situation truly struck me. Seeing my tiny world annihilated on screen was one of the most horrific and lasting realizations that we were doomed as a society. Yet that future didn't come to pass. The START treaties happened, and I finished growing up in a world where those missiles were removed and their silos imploded. When you drive by E10 today, the fenced outline marking the border of the silo is the only sign it was ever there. Bales of hay are stacked in rows above the very spot that used to house one of the world's deadliest weapons.
The reasons why nuclear Armageddon was canceled were far too complex and fluid for any type of predictive model. It turns out human behavior isn't always that simple to model or forecast. We spent so much energy and time imagining the end of the world during the Cold War that we ignored, or paid lip service to, real-world harms we were aware of, like climate change.
Let’s not make the same mistake now that so many in Silicon Valley have become obsessed with a Hollywood fantasy of AI.
Why We Must Address Today Before Tomorrow
The obsession in Silicon Valley with chasing the fantasy of artificial general intelligence is causing many to ignore the real and pressing issues posed by current AI systems in education and beyond. Generative models offer users what Annette Vee aptly terms a utopian fantasy where everyone gets to be a manager. This is itself a dubious sort of daydream, one many ignore in favor of embracing x-risk. As Vee argues:
AI is already out of our control when we sit down at the ChatGPT terminal or the Midjourney Discord. Generative AI is stochastic, which means it’s both statistical and random. Sure, the most accessible models are trained with human feedback and guardrailed. But we can’t really know what the AI will generate in any given interaction. This makes AI different from most other computational technologies, where the computer does exactly what it’s told. Even as individual users, we can’t control AI outputs. Can we manage it, like the nice Microsoft man suggests above? Can we lead AI? I don’t know.
For me, this is the current real-world danger posed by generative AI: it not only elicits fantasies of control in the user, it also forces those who try to institute guardrails to confront their own fantasy of control. So many in education immediately pivoted to AI detection or other forms of surveillance just to reassert control over what they valued, only to be faced with a black box's random predictions about whether a student cheated. Perhaps the wisest path forward is to proceed with humility rather than hubris.
Rather than seeking to tightly regulate or restrict generative models in reactionary ways, we should foster broad literacy around their capabilities and limitations. Students and educators alike need opportunities to critically examine AI-generated content, unpacking how it works and when it fails. Assessment methods should shift from simplistic detection to rewarding original and ethical integrations of these tools.
And for those dazzled by the promise of future AGIs that transcend unpredictability, patience and perspective are warranted. We cannot afford to ignore present realities while chasing far-off visions of superintelligent utopias or doomsday scenarios. The emerging impacts on education demand our attention now. If the leaders in tech stepped out of their echo chambers of profit and fantasies of doom, students and teachers would tell them that they need guidance and support today, not in some distant tomorrow that may never arrive.