7 Comments

This document does two things I find problematic.

ChatGPT's big value lies in helping workers strip repetition and rote cognitive labor out of ongoing tasks: work that requires their higher-order thinking skills but includes pattern-based pieces an inference machine can take over. I use it a lot for that, and shaving chunks of time off tasks is a game changer.

Education is collaborative, slow, and cognitively demanding. It requires reflection, it benefits greatly from interpersonal relationships, and memory gains weight from emotionally valued experiences, which are best had with others.

Unfortunately, this guide posits the student work experience as something to be sped up and outsourced. "Delegate citation grunt work to ChatGPT" is the very first sentence. The tone is immediately one of "your learning tasks are laborious, and beneath you." You are being denied the fun stuff in learning, because your instructors are meanies.

Also, NONE of this encourages students to interact with other people. We know learning is a social activity, done with others, yet this student guide suggests replacing peers and instructors. Other people are totally and intentionally removed from each of these steps. That runs counter to millennia of human learning patterns and to educational research.


You raise a good point about the collaborative nature of learning; I hadn't considered this risk of chatbots. It's much easier to just pull up ChatGPT (especially with Advanced Voice Mode) than to find a time to meet with your study buddy or group. There are ways that LLMs could be a *complement* to collaborative study groups, but the current solutions do the opposite.

I love the idea of AI as an always-on resource for enabling and rewarding independent study, but this strength is currently coming at an extraordinary cost.


Instead of releasing a guide, OpenAI could have easily shipped a new model selector (GPT-4o for students), which would literally just be the existing model with a student-specific system prompt. This doc feels more like PR than any substantive move to improve student usage of LLMs. OpenAI completely controls the interface; they don't need to rely on casual suggestions on a totally separate web page that students will likely never see.


First off, I appreciate you digging into the world of Talkie so I don't have to. I occasionally think I should go exploring in the weird wide world of anthropomorphized LLMs, but I never last long.

It seems to me that the only people moving slower than professors in wrapping their heads around AI in education are the folks at OpenAI. Two years too late, and several apples short of a full barrel. It's nice that they are telling students that AI is not a shortcut but a tool to support learning. I'm not sure any professor will be convinced by a few links in a bibliography, though.

I suppose you can't blame them too much for anthropomorphizing AI, given how much they have riding on better-than-human intelligence arriving in the next few thousand hours. Still, this does not seem like much of an effort to guide students.


If only there were hundreds of millions of dollars put into existential risk. But this is where we honestly have a lot in common, because AI persuasion and its risks are what harm people in the near term and lead to existential risk.


They are just following the trend. In the last 50 years, enough technology has been built that negatively impacts children’s mental and physical health. It keeps them glued to screens rather than playing outside. Children are now introduced to technology earlier and earlier in life, which has significant consequences.

Most humans, including children, will use a shortcut if allowed, and AI will enable them to do so. We may see a generation that struggles to write, comprehend, or think critically—similar to how many people now struggle with basic arithmetic without a calculator.

However, humans and children who use AI to augment their abilities rather than outsource them will thrive. The challenge is: How do we make more people understand this? The cat is out of the bag, and there’s no way to return it.

Technology companies are primarily motivated by profit, not ethics. They aim to grow their user base, foster dependency, and secure funding for future AI advancements, ensuring profits for all parties involved. Expecting them to address the negative impact of their tools without regulation is naive. Many companies envision a world where traditional education models are obsolete and replaced by their technologies.

Rather than debating whether kids should not use AI, we should focus on adapting to the reality that kids will use it. We need courses and assignments that build skills AI cannot replace while teaching responsible AI use.

I’m not an educator, so the following may just reflect my lack of experience in the field, but here are a few ideas that I think may work if we want children not to get answers directly from an AI:

- Scenario-Based Learning: Use real-world problems to teach critical thinking and decision-making.

- Detecting Misinformation: Teach students to identify false or AI-generated content and verify sources. Finland is trying this; here is a link: https://tinyurl.com/4jdxd5fw

- Interdisciplinary Projects: Combine subjects like science, art, and history to foster collaboration and creativity.

- Building Emotional Intelligence: Focus on empathy, interpersonal skills, and emotional self-awareness.

- Design Thinking: Teach students to solve problems creatively, iterate on solutions, and think innovatively.

By focusing on these areas, we can help children develop skills that set them apart in an AI-driven world while ensuring they use AI responsibly and effectively.

I was talking to someone recently who refused to use any AI tool, and when I asked him why, here’s the phrase I heard:

"It’s better to be worn out than to rust out."

This perspective is worth considering. I’m not against using tools to augment our abilities, but I believe we should not outsource our thinking. Instead, we should embrace the Royal Society motto:

"Nullius in verba" —'take nobody's word for it.'

This is the mindset we need to instill in everyone. By teaching children to think critically and question everything, we can ensure they use AI as a tool for growth rather than a crutch for convenience.


How good is ChatGPT at identifying uncited paraphrasing? Can ChatGPT provide feedback like a professional writing tutor, without telling students how to think about a topic? If you were to feed a student paper with formatting issues to ChatGPT, would it accurately identify all of those problems? I'm not going anywhere with this; those are just questions I don't currently have time to explore myself.
