This fall, our students will have free access to OpenAI's most powerful model every time they use ChatGPT. Education isn't prepared for students who can't discern when to use this much-improved version of ChatGPT to aid their learning and when they are uncritically offloading the learning process onto an AI tool. We need to be very real about where this is headed. Far too many educators have checked out of the AI discourse. They played around with the free version of ChatGPT, came away unimpressed, and stopped worrying about AI's impact on how they taught. That time is coming to an abrupt end, and we need to help prepare our colleagues for the fall.
I’m trying to chart a pathway now in the Beyond ChatGPT series of this newsletter, showing how students are being marketed AI use cases to help them read, take notes, get feedback, receive tutoring, conduct research, and, of course, write. The influencers selling these use cases are marketing generative AI directly to our students. So, too, is OpenAI, as its latest demo makes clear.
AI has changed learning. That much is undeniable. What is also undeniable is how many within education fail to acknowledge it. We must invite these people off the sidelines and back into the discourse. I think the best way to do so is by increasingly asking ourselves, our institutions, and the developers of AI tools: does your tool or use case meet existing regulations for edtech?
OpenAI Is Marketing GPT-4o to Students
OpenAI’s announcement included a demonstration of GPT-4o solving math problems, offering feedback, and talking through problems: the very skills educators strive to teach. An understated theme ran throughout the demo: our technology was good before, and we just made it better and even more friction-free for you to use.
While GPT-4o has the potential to act as a personal tutor, guiding students through problems, it also poses the risk that students rely on it too heavily and bypass critical learning processes. But we don’t hear any of this from OpenAI. At one point during the math portion of the demo, the speaker instructed the AI to help him solve a problem and had to stop it from simply solving it for him. Do we actually believe students will show that kind of restraint when prompting a model? Much of this demo is a sales pitch aimed directly at the education sector, and students are the chief consumers.
From OpenAI’s perspective, getting students to use these tools is good business. Students who adopt AI in our classrooms now will carry it into the workforce and continue to use it in their jobs. It creates a pipeline effect: a generational user base that will likely view AI as a trustworthy and helpful tool.
This acceleration of deploying the most advanced AI right into the hands of a bewildered public has downstream consequences no one can imagine. Yet here we are once again, ripping up slide decks and revamping faculty development programming to try to keep pace with perhaps the most advanced technology ever released upon society, with no plan, no course of action, and no real understanding of what an innovation like this is doing to learning and education now, let alone what it will do in the future.
I’ve said it before and will say it again: we are in a grand public experiment with AI that no one asked for. Quite a few of us are going to great lengths to ignore how quickly our world and our interactions are being exposed to automation. With this latest release, I don’t see that position being tenable any longer. Education is being ushered into this new generative era whether we like it or not, and we can either demand ethical and transparent behavior from developers and adopters or risk being pushed aside in favor of sweeping technological change.
Generative AI Should Be Regulated, Just Like All Other Edtech
TeachAI released their AI Guidance for Schools Toolkit last year under a CC BY-NC-SA 4.0 license, and arguably the most impressive part of the toolkit is its thoughtful examination of the existing regulations that generative AI tools fall under. I’ve copied it below:
Current regulations relevant to the use of AI in education
United States
FERPA - AI systems must protect the privacy of student education records and comply with parental consent requirements. Data must remain within the direct control of the educational institution.
COPPA - AI chatbots, personalized learning platforms, and other technologies collecting personal information and user data on children under 13 must require parental consent.
IDEA - AI must not be implemented in a way that denies disabled students equal access to education opportunities.
CIPA - Schools must ensure AI content filters align with CIPA protections against harmful content.
Section 504 - The section of the Rehabilitation Act applies to both physical and digital environments. Schools must ensure that their digital content and technologies are accessible to students with disabilities.
Educators should advocate for policies that ensure AI tools comply with existing regulations, and this absolutely includes the newly updated ChatGPT. Parents need to discuss these regulations with school boards, and institutions should include these considerations in their faculty development programs.
Why Is No One Talking About Regulating AI in School?
There are so many unknowns with OpenAI’s newly announced free-to-use GPT-4o model. We know from the demo that it will have multimodal capabilities like real-time audio and video, which makes me wonder how deploying it in a classroom complies with FERPA’s privacy protections. I also wonder how removing account restrictions for free users of ChatGPT squares with COPPA’s requirement of parental consent for children under 13. I’m betting many schools have no idea whether an AI tool is compliant under IDEA or a student’s 504 plan until an issue arises.
It’s clear to me that many of the ways students and teachers use free AI tools like ChatGPT may not be compliant with existing regulations in K-16. It’s up to us to make noise, raise questions, and advocate for ourselves and our students. Educators should have a place at the policy table to advise developers of foundation models on better ways of limiting student access to protect vital skills from automation. The risks are simply too high to say nothing or sit on the sidelines. The future is here, and it won’t wait for us to reach consensus on best practices or idealized scenarios. Far too many of our students are uncritically adopting this technology now; what will the next three to five years look like if we continue on this path?
A couple of observations and reactions to Marc's post:
1) My sense is that most teachers have indeed moved on, whether for the reasons Marc gave or for other, more pressing issues.
2) I'm not sure the free upgrade to the most powerful model is going to move the needle for either teachers or students. Many have not necessarily checked out, but they may not immediately see the improvement in the more advanced models that high-frequency users take for granted.
3) Administrators are mostly clueless, and the tech folks who may be pushing the kinds of regulatory concerns mentioned here generally don't have the power or influence to put this on their plates.
4) The horse is already out of the barn, and I fear it's already too late for most teachers to catch up, given the ubiquity of products and the speed with which change is occurring. The changes OpenAI demoed yesterday were likely ignored by all but the most interested and curious educators.
5) Perhaps this is the saving grace, but most students are clueless as well. On the plus side, I ran an informal survey of our HS students (you can view a copy of the survey here: https://docs.google.com/forms/d/e/1FAIpQLSe8Bz6ui--7LEkP7L3w3QSPYoAtS4rqZI56YDiPRL2yqW4P1Q/viewform?usp=sf_link), and over 80 percent recognized that AI can take away from their learning experience. Only a third claimed to have used it, and most (more than 50 percent) said they did not use it because it was against the rules or because they were ethically opposed (a 60/40 split on those two reasons). Granted, this is one small survey from an independent school, and who knows how honest the students were being, but the responses align with what I have been seeing. I would love to see more data across a wider spectrum of schools and students.
Bottom line: I'm not sure I see a major change in use this fall barring even more advanced models emerging, and I don't think that would do it either. One teacher recently emailed me asking whether AI was just a fad or something she had to learn. I fear that's the norm.
The useful information and knowledge the AI provides probably does not cover all the matters and details that the teacher provides, so for the present, this personal method of teaching is still important.
In the future, when AI is more resourceful in what it can supply, students will need to replace the knowledge they gain about their subject of study with a greater ability to operate the AI medium itself and to use it to obtain better knowledge. This strikes me as at least as difficult as a straight course in their subject, given how computer skills are taught and how limited the AI's responses can be.
For example, when I ask a question, the chatbot usually replies about the first subject that appears in my question but pays little regard to the question itself!