This document does two things I find problematic.
ChatGPT's big value is helping workers remove repetition and rote cognitive labor from ongoing tasks: work that still requires their higher-order thinking skills but is pattern-based enough to be supplemented by an inference machine. I use it a lot for that, and shaving chunks of time off tasks is a game changer.
Education is different: it is collaborative, slow, and cognitively demanding. It requires reflection, it benefits greatly from interpersonal relationships, and memory gains weight from emotionally valued experiences, which are best had with others.
Unfortunately, this guide frames the student experience as something to be sped up and outsourced. "Delegate citation grunt work to ChatGPT" is the very first sentence. The tone is immediately "your learning tasks are laborious and beneath you": you are being denied the fun stuff in learning because your instructors are meanies.
Also, NONE of this encourages students to interact with other people. We know learning is a social activity done with others, yet this student guide suggests replacing peers and instructors. Other people are totally, and intentionally, removed from each of these steps. That runs counter to millennia of human learning patterns and to educational research.
You raise a good point about the collaborative nature of learning; I hadn't considered this risk of chatbots. It's much easier to just pull up ChatGPT (especially with Advanced Voice Mode) than to find a time to meet with your study buddy or group. There are ways that LLMs could be a *complement* to collaborative study groups, but the current solutions do the opposite.
I love the idea of AI as an always-on resource for enabling and rewarding independent study, but this strength is currently coming at an extraordinary cost.
Instead of releasing a guide, OpenAI could easily have shipped a new model option ("GPT-4o for Students"), which would literally just be the existing model with a student-specific system prompt. This doc feels more like PR than any substantive move to improve student usage of LLMs. OpenAI completely controls the interface; they don't need to rely on casual suggestions on a totally separate web page that students will likely never see.
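To make the point concrete: a "student mode" would amount to nothing more than prepending a system message before the student's turn. Here is a minimal sketch using the standard Chat Completions message format; the prompt wording and the `build_student_request` helper are my own hypothetical illustrations, not anything OpenAI has published.

```python
# Hypothetical sketch: "GPT-4o for Students" as the existing model plus a
# student-specific system prompt. The prompt text below is invented for
# illustration; a real deployment would tune it carefully.
STUDENT_SYSTEM_PROMPT = (
    "You are a study aid, not a shortcut. Ask the student to attempt an "
    "answer first, respond with guiding questions rather than finished "
    "prose, and never produce graded work on the student's behalf."
)

def build_student_request(user_message: str) -> dict:
    """Assemble a Chat Completions-style request body for 'student mode'."""
    return {
        "model": "gpt-4o",
        "messages": [
            {"role": "system", "content": STUDENT_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

req = build_student_request("Explain photosynthesis to me.")
print(req["messages"][0]["role"])  # system
```

That is the entire "feature": one extra message in a payload OpenAI already controls end to end, which is why a guide on a separate web page feels like the weaker move.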
First off, I appreciate you digging into the world of Talkie so I don't have to. I occasionally think I should go exploring in the weird wide world of anthropomorphized LLMs, but I don't last long.
It seems to me that the only people moving slower than professors in wrapping their heads around AI in education are OpenAI. Two years too late, and several apples short of a full barrel. It's nice that they are telling students that AI is not a shortcut but a tool to support learning; I'm just not sure any professor will be convinced by a few links in a bibliography.
I suppose you can't blame them too much for anthropomorphizing AI, given how much they have riding on better-than-human intelligence arriving in the next few thousand hours. Still, this does not seem like much of an effort to guide students.
If only there were hundreds of millions of dollars put into existential risk. But this is where we honestly have a lot in common, because AI persuasion risk is what harms people in the near term and also leads toward existential risk.
How good is ChatGPT at identifying uncited paraphrasing? Can ChatGPT provide feedback like a professional writing tutor, without telling students how to think about a topic? If you were to take a student paper with formatting issues and feed it to ChatGPT, would it accurately identify all of those problems? I'm not going anywhere with this; those are just questions I don't currently have time to explore myself.