A couple of observations / reactions to Marc's post. 1) My sense is that most teachers have indeed moved on, whether for the reasons Marc cited or for other, more pressing issues. 2) I'm not sure the free upgrade to the most powerful model is going to move the needle, either for teachers or students - many have not necessarily checked out, but they may not immediately see the improvement in the more advanced models that high-frequency users take for granted. 3) Administrators are mostly clueless, and the tech folks who may be pushing for the kinds of regulatory concerns mentioned generally don't have the power or influence to put this on their plates. 4) The horse is already out of the barn, and I fear it's already too late for most teachers to catch up given the ubiquity of products and the speed with which change is occurring - the changes demoed by OpenAI yesterday were likely ignored by all but the most interested and curious educators. 5) Perhaps this is the saving grace, but most students are clueless as well. On the plus side, I did an informal survey (you can view a copy of the survey here - https://docs.google.com/forms/d/e/1FAIpQLSe8Bz6ui--7LEkP7L3w3QSPYoAtS4rqZI56YDiPRL2yqW4P1Q/viewform?usp=sf_link) of our HS students, and over 80 percent recognized that AI can take away from their learning experience. Only a third claimed they have used it, and most (more than 50%) said they did not use it because it was against the rules or because they were ethically opposed (roughly a 60/40 split between those two reasons). Granted, this is one small survey from an independent school, and who knows how honest they were being, but the responses align with what I have been seeing. I would love to see more data across a wider spectrum of schools and students. Bottom line: I'm not sure I see a major change in use this fall barring even more advanced models emerging, but I don't think that will do it either. One teacher recently emailed me asking whether AI was just a fad or something she had to learn.
I fear that's the norm.
The useful information and knowledge provided by the AI probably does not cover all the matters and details that the teacher provides, so for the present, this personal method of teaching is still important.
In the future, when AI is more resourceful in what it can supply, students will need to replace some of their gained subject knowledge with a greater ability to operate the AI medium itself, and to use it to gain better knowledge. This strikes me as being at least as difficult as straight courses in their subject, given how computer management is taught and how limited AI's responses can be.
For example, when I ask a question, the chatbot usually replies about the subject that first appears in my question, but pays little regard to the question itself!
Spot-on diagnosis, in my humble opinion. In regard to your question (Why aren't we...), my hunch has a simple explanation: for most teachers, this generative AI tech is a sudden overwhelm that prompts (pun intended) anxiety, uncertainty, exhilaration, fear and assorted emotions, but also--from an analytical and reasonable perspective--an attitude of "too early to tell". Often the best teachers are old school; they are not techno-optimists. They ... teach. For them, there is a skepticism. I think your concern is warranted because (i) we cannot ultimately trust the tech vendors: they will build and sell without high regard for consequences, and (ii) the customers, er, students, are clever and will use the tools, often faster than the teachers. My vote for the culprit here is speed. We can't keep up with the models. Nobody can "get ahead" of this. I think it is a mistake to underestimate the potential consequences of the disruption that will unfold.
I understand the worry about education amid this rapid transformation coming into schools. The potential and the consequences of this advanced technology are serious, and we shouldn't underestimate the damage to students' skills beyond just the hard skills.
Making the latest model accessible to everyone is an ambitious idea, but I don't think OpenAI understands that, for the majority of students, education systems optimized for standardized tests incentivize them to use AI to offload their work instead of using it as a tutor. The people using Khan Academy are not the average American high schooler.
Some negative consequences I got from Claude 3:
1. Overreliance on AI: Students might become overly dependent on ChatGPT-4 for completing assignments, which can hinder the development of essential skills like critical thinking and problem-solving. This overreliance can lead to a decrease in students' ability to perform tasks independently and think creatively.
2. Compromised Academic Integrity: The ease of generating content with ChatGPT-4 can lead to increased instances of plagiarism and cheating. Students might submit AI-generated work as their own, which violates academic honesty principles and devalues genuine learning and effort.
3. Reduced Critical Thinking Skills: Relying on AI for answers can prevent students from engaging deeply with the material. This can result in a superficial understanding of subjects and a lack of critical analysis skills, which are crucial for academic and professional success.
You make a great point early on that students' over-reliance on it runs the risk of bypassing critical learning processes. There are certainly benefits to be gained through productive struggle.
Great post, and I couldn't agree more.
What do you feel are some of the obstacles getting in the way of good regulation for AI in education? And how might we overcome those obstacles?
Curious to hear your thoughts...