Our Era of Uncertainty: What GPT-4 Means for Education
ChatGPT was released four months ago, and now its multimodal successor is here.
Are you excited about GPT-4? Eager to try it out? I’m not. Generative AI tools like ChatGPT and GPT-4 have the potential to redefine what it means to be literate in our new digital age. Those who adapt quickly to this new technology will have a better chance of success in their future careers than those who lag behind. However, that adaptation will require massive resources and fundamental structural changes in higher education.
The arrival of GPT-4 feels like a gift no one asked for. I cannot fathom a reason or use case for releasing an even more capable model so soon after the last. It’s clear OpenAI and Microsoft want to saturate the market before Google deploys its own generative AI. GPT-4's multimodal capabilities mean that it can likely analyze images, music, video, and code all in one interface. Users can work with up to 25,000 words of text at a time within GPT-4's interface, and book-length input isn't out of reach.
OpenAI did attempt to red-team GPT-4, engaging experts to probe the potential risks the model poses, and those risks are unsurprisingly terrible. Regina Rini’s tweet thread highlights many of them from OpenAI’s GPT-4 System Card:
Generating personalized propaganda and creating upheaval in our political systems
Making the construction of weapons of mass destruction more feasible and accessible
Disrupting our banking system
Being tuned in ways that build trust in users
These are just the risks a small team of experts brainstormed. It’s likely impossible to foresee all the major risks or potential benefits in a single paper.
What This Means for Education
My experience is that the vast majority of faculty and students are unaware of or indifferent to generative AI, but those who are paying attention see the risk of failing to adapt. We are heading into an uncertain future where stable jobs and careers face potential upheaval like never before.
It’s highly likely that traditional university education and faculty jobs are high on the list of fields that will see continuous disruption from generative AI. Faculty need stability and calm to explore and understand how generative AI will impact their disciplines and their students. We need training, but what is the point of such training when the systems keep changing? Higher education isn't built for this type of continuous disruption, which means we will always struggle to keep pace with generative AI.
The best we can do is discuss how this technology is changing what it means to work and learn. Eventually, these discussions will lead to training that I imagine will try to answer some of the following questions:
What resources will we need?
What skills will students require?
What will future employers expect from skilled workers?
How will this impact my discipline?
How do I create boundaries with my students?
How do I continue to teach students to think and learn and not offload those skills to an AI?
Who is going to pay for the immense amount of labor required to train people?
These questions pose significant challenges for higher education as it attempts any type of training, especially when big tech keeps altering the capabilities of these systems in a matter of months. The ultimate frustration is that the questions driving the first wave of training may very well change a year or two from now, leading to an arms race of continually upskilling faculty and staff out of fear that failing to keep up means we aren’t preparing ourselves or our students for the future.