The Urgent Need for AI Literacy in Education
We're not going to be able to resist GenAI if we don't understand it
No one knows how quickly Microsoft, Google, and Amazon will move generative AI from beta testing within their Office and Workspace products into wide release, but it isn’t hard to foresee the impact this will have across K-12 and higher education in the US.
Policies Will Be Ineffective
Most users equate text-based generative systems with ChatGPT, but the days when users must go to a third-party site and enter a prompt are going to end. Beta testers of Google’s generative AI feature within Google Docs are greeted with it the moment they open a new doc. What’s more worrisome, there’s no indication of what’s powering the feature. To an uninformed user, it simply appears as a new feature in the software they’ve trusted for years.
When every user has access to such features with little understanding of what they are or how they function, policies that ban or even try to put guardrails on generative AI’s use aren’t going to work. Worst of all will be those that attempt to enforce such policies on students. We cannot assume faculty or students know anything about generative AI, and punishing them for its use or misuse will never be a simple process. Our standards for academic integrity will need to be rethought now that generative AI is widely available. We need a sustained discourse on our response, not policy statements that will do little more than fill space on a syllabus.
The Rush to Make Stakeholders AI Literate
Everyone, from administrators to faculty, staff, and students, will need a solid foundation in the dos and don’ts of using generative AI in education. We’re going to see a massive call for teaching everyone AI literacy. The challenge to education is that our snail-like pace in standing up curriculum for students and training for faculty is hopelessly outpaced compared to how big tech is scaling and deploying generative AI.
Still, we must try. I helped start the AI Summer Institute for Teachers at the University of Mississippi, and we held our first round of training on June 1st. I’ve already had to update several training modules because of new announcements, newly deployed features, and new use cases. The time this has taken, and will take, is staggering. Institutions will need to consider the professional development needs of faculty who are forced to deal with generative AI advances in real time.
Curriculum designers will likewise need to think about what students need in order to understand the ethical, scientific, and cultural implications that adopting these tools will have for their learning and for society at large. Many of my students in the spring semester openly discussed what they expect work in their career fields to look like, and several wondered whether those jobs would even exist by the time they graduated. All of those factors will need consideration when we try to teach students what it means to be AI literate.
Will We Know When We’re Using It?
My near-term fear for the coming year is that generative features will increasingly be viewed not as distinct abilities or even systems, but simply as upgraded features in many of the apps we use daily. I witnessed this during the spring when my students came to class talking about the AI chat features Snapchat and Tinder had both added. The Tinder conversations were hilarious, but the underlying shift is vast: through OpenAI’s plugins, Khan Academy, Wolfram Alpha, and Quizlet have all adopted generative AI.
As we look further out, it isn’t hard to imagine that most LMSs will likewise integrate generative AI as upgraded features. With prompt context windows increasing to 100K tokens from Anthropic and a teased 1M from OpenAI, writing faculty who grade 80-100 papers at a time will be offered an “upgrade” that lets a technology they don’t fully understand provide each student with formative feedback on their writing. Such a feature may simply default to being turned on.
Chatbots geared toward student success will simply appear one day. Initially designed to provide basic feedback, these bots will soon take on the ethically dubious task of monitoring students’ mental, physical, and emotional health. They’ll be marketed and sold as 24/7 support to anxious parents worried about their children, and as a digital insurance policy to protect the financial investment in their education.
I don’t think most four-year colleges and universities will rush to adopt generative AI systems, but online, for-profit colleges and struggling community colleges will be forced to. The ability to provide education cheaply at scale is far too alluring a prospect for struggling institutions, and this has profound implications for our society. A college education has always been a mark of social standing, and one of the chief promises of this technology is that it will give everyone a low-cost or subsidized pathway to higher education. But the grim truth is that this will create a new mark of social standing: were you one of the lucky ones who could afford to attend an institution where a human being worked with you, or did you go to a school where teaching was outsourced to a chatbot?
Our path forward is to arm ourselves with knowledge, enter conversations with stakeholders, and advocate for the importance of human interaction. This advocacy doesn’t need to take an all-or-nothing approach to generative AI in education; however, it does mean we need a foundation in AI literacy before we can explore or attempt to mitigate this technology’s impact on teaching and learning.
I don't know about this: "The challenge to education is that our snail-like pace in standing up curriculum for students and training for faculty is hopelessly outpaced compared to how big tech is scaling and deploying generative AI." Won't AI design the curriculum and training for us? I view the problem as institutional inertia (local, state, regional), although there remains a lingering disdain for computer science in academia that is also a barrier.
Absolutely agreed this is an essential conversation to be having. We held a summit for high school, community college, and college educators here at the University of Kansas focused on critical AI literacy, also on June 1. People were hungry for a space to think and talk about it.