14 Comments

It's disheartening to see how, in the name of efficiency, many aspects of our lives are being made programmable to maximize their potential. Conversations and learning experiences are reduced to mere exchanges of information and instructions, with no deeper meaning. Human communication, in all its richness, is seen as the messiest way of delivering information—precisely because of how human it is.

What I find most alarming about the development of these kinds of products is that, even though they offer minimal value, they are still cheaper than paying a teacher or tutor to guide a student. The "service" provided by this technology is poor, yet efficient and cost-effective in the most inhumane way possible, making it scalable and deployable in impoverished or disenfranchised areas. This process ultimately turns human connection into a luxury that only a few can afford. Automation for the many, and human connection for the few.

Sep 6 · Liked by Marc Watkins

You make a clear and valid point that I wholeheartedly agree with. I find the suggestion of creating my twin repulsive, even if it were "only" an ElevenLabs voice twin, let alone a twin to teach what I teach.

Having said that, I work in a privileged position as a language teacher at a university, with small courses and motivated students whom I can make comfortable speaking and exploring the English they are learning. But most lectures, and even seminars, at German universities are attended by hundreds of students at a time. None of them get the chance to regularly ask their lecturers questions or receive feedback on what they might want to say or write, so I can see the attraction of a "twin" that would be available to all students. There are no tuition fees at public universities in Germany, and private universities are already expensive, but only very few have a really good reputation, so buying a tutor is not in the cards (for now). If the AI tutor is subject-oriented and offers students an additional way to engage with the subject, I do think this might actually do some good.

And let's not think that if we oppose this development there will be more teachers. There won't. Here in Germany, nobody even wants to become a (school) teacher anymore. At the moment, university jobs are still quite coveted, but the stress level is really high, too.

To me it seems that we need to take action to channel this new technology in a sustainable and helpful direction. We therefore need studies that address how, where, and to what end AI tutors are used, and that explore what happens when students engage with them. We should make students and administrators aware that AI tutors may be useful, like other tools (e.g. a spell checker), but that they will never replace a human teacher, because a great part of learning depends on human and social interaction.

Author

My fear is that we're already headed toward replacing teachers. No human can provide instantaneous feedback or support 24/7. Lecturing without asking questions seems like a recipe for making learning entirely one-sided and transactional. I can certainly see how AI would be an attractive option in this context.

Sep 6 · Liked by Marc Watkins

Thank you for what you have written. It's true: this is an entirely anti-living development, especially without regulation.


I respect your perspective and agree with your take on how technology is changing the way humans, especially teenagers, interact with each other. This is especially apparent when you take away their phones: for the first few weeks they are like zombies, unsure of how to interact with each other face-to-face.

But I think this very sincere criticism of technology is misplaced if you are aiming it at AI twins and the capacity for each student to get a personalized tutor. As you probably know, in a single class we could have students with a range of reading and writing levels spanning six years. I'm often fortunate if a few are actually at the expected level for the subject I teach. So these personalized AI tutors can have such a positive impact in my classroom! I understand that there are many veteran teachers out there who don't need this tool. However, for new teachers, especially those working in Title I schools where the gap between reading and writing levels is often the widest, these tools can play a significant role in closing the academic gap.

Sep 6 · Liked by Marc Watkins

As someone who has worked a lot with these chatbots and tried out quite a few tools, I think one thing that frequently gets overlooked is that, in order to maximize the benefit of using them, one has to be able to write and communicate well to begin with. Asking students who may be struggling with reading and writing to master interacting with a chatbot in meaningful and helpful ways is a challenge in and of itself. Most kids simply do not want to learn this way, and those who could utilize the tool effectively likely do not need the help. Students who are struggling are struggling because they lack the ability to write clearly and read fluently - the essential skills you need in order to engage with an AI tool. It's a Catch-22.

Author

Well said, Steve! Many are missing the fundamental skills needed to use this technology.


I grew up a bit north of where your grandfather did, a few decades later. I've had the same feeling about my grandparents, all of whom were born before 1900 - I'm a late child of late children.

I agree with everything you wrote here. It also reminds me strongly of Shannon Vallor's The AI Mirror. AI cannot create; it can only reflect and reinforce what already exists, be it text, images, or rules of behavior. It seems like Microsoft is preparing us all to be like robots - no doubt what many employers would prefer. AI like this is not a great help. Ultimately it oppresses us and suppresses our development. It does not liberate us from the tiny boxes we inhabit but pushes us deeper in and tightens the lid down even more.


This is a great recent piece in the New Yorker by Ted Chiang, titled "Why A.I. Isn't Going to Make Art," that articulates many of these points (some in the context of education, most not) in a way that certainly no current AI could ever do. Highly recommend reading it.

https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-art


I'm writing this off the top of my head, and not necessarily about an educational context, though it fits into that as well. I am not trying to argue for AI creativity. Maybe I am arguing for a new critical stance.

We already have a tiered approach to the arts - written, oral, pictorial, auditory, performing, etc. We do not treat fine art, folk art, commercial art, designs generated by graphing equations, paint-by-numbers, or coloring books the same. We are able to discriminate between them. We are able to understand their differing aesthetic appeal and treat them in the contexts in which they occur. If we set aside the various questions about plagiarism for the present purpose (not because they are unimportant - they are - but because they would muddy what I am playing with here, which is still too incoherent to take on another dimension), can we do the same with AI-generated text, images, etc.? Yes, it is a very derivative kind of creativity. Maybe it is more akin to found art or scrapbooking. We are already assigning it to some category, or rather we are struggling to fit it into existing categories, but we are not comfortable with them yet - perhaps because of the ideological and philosophical implications as much as the uncanny-valley nature of the output.

Go back to what Chiang says in the article about Bennett Miller's use of DALL-E 2 to carefully create the images he displayed at the Gagosian Gallery. Is there a parallel there to found art? I do not mean literally, but is there a kind of creativity through selection that maybe parallels what Joseph Cornell did with his boxes? I don't know. I had not seen Miller's pieces until just now and need to better understand what he did. I am going mainly on Chiang's article.

I bring up Cornell because my literary touchstone for "AI art" is one of the plots in William Gibson's 1986 novel Count Zero, in which the disgraced art dealer Marly Krushkova is hired by the wealthiest man in the world to discover who or what is making Cornell boxes containing objects not available when Cornell lived. She discovers they are being made by the remnant of an AI operating a mechanical arm that grabs debris in the derelict section of a giant orbital resort. Its creativity is literally based in found objects and in iterating on the work of a twentieth-century artist. I am not suggesting we are close to that yet, but how would we fit this into our understanding and categorization of art? Can we conceive of a future where we do this, where it is simply another form of imagery, design, writing, etc.?

How would we categorize it? How would we evaluate it and its authorship? How would we work with it in the classroom? What would a university art history class on art in the third decade of this century cover and what would it be like?

Sorry Marc, I guess I am hijacking your essay to go off in a different direction. This is just a jumble of poorly coordinated thoughts right now.


Hey Marc, great article, thanks for sharing.

With respect to giving feedback on AI chats, I'd like to propose an idea.

When we evaluate the interaction, if we think about it as a student pushing a button on a machine, it becomes soul-crushing and painful. Instead, what if we evaluated a brainstorming session through the lens of communication, critical thinking, and creativity? In other words, I don't particularly care if the student "got the most out of the bot." I care if they write well, ask creative questions, provide context where necessary, think outside the box in brainstorming sessions, and critically analyze every suggestion that the AI makes.

When I graded the chats, I really didn't need to look at the AI outputs very much at all. There were some cases where a student's iterative prompt made me go "huh?", in which case I had to go back and read the ChatGPT output to figure out where they came up with such an odd question - à la diagnosing a misconception. But if we view AI interactions as a reading and writing skill, rather than a machine-use skill (i.e. did you push the right button?), the interactions can sometimes be fruitful pieces of text that are worthy of analysis.

What do you think?

Author
Sep 6 · edited Sep 6

That’s essentially the basic system Blackboard offers users, but it isn’t clear to me what questions I’m supposed to grade. It also isn’t clear why we’d develop systems that supposedly portray human likenesses or identities. What is gained from having a chat with the synthetic Mike from a student’s perspective?


If it's an open-ended endeavor (i.e. brainstorming or analyzing a task with a non-binary outcome), I grade their communication, critical thinking, and creativity. In other words: give them a large task at the outset that they generally would need help with -- they use AI to brainstorm the early stages of the project -- and their prompting and iterations become a process-based writing task. How well did you communicate your purpose? How creative were you in the follow-ups to push the task in new, better directions? How critically did you analyze the AI's suggestions? How thoughtful were you in your choices regarding what you "took" from the AI brainstorming partner? The choices should align with the goal of the project.

Key points: AI has to produce suggestions, not answers or opinions, for this to work.

The task has to be open-ended, so that AI can't answer it. With an open-ended task, it can provide ideas, and the student has to be analytical and make thoughtful choices from the menu(s) of suggestions. From the student's perspective, they (potentially) are getting good ideas from a brainstorm partner that helps them with the overall task. Additionally, I think that feedback on AI use in general is crucial to avoid bad habits. I don't know how we avoid overreliance in future generations without feedback. Students might not get that, but unfortunately I think we have a responsibility to consider it.

That's the use case for brainstorming with a vanilla LLM. It looks very different if it's a personality bot (synthetic Mike) or a custom GPT (a contrarian bot, for example). It doesn't work for tasks with binary outcomes.

Sep 6 · Liked by Marc Watkins

I have used AI for brainstorming quite a bit and in general have been mostly impressed and helped by the results. But I am very picky and can sort through the dead ends and wrong turns to get the AI back on track toward what I am after. The follow-up prompts are even more important than the original. My fear (and this has been borne out in working with students) is that students frequently treat the "suggestions" as answers or requirements and do not know enough to challenge the assumptions or conclusions the AI is making. I am left with the feeling that asking students to actually read source material on a topic of interest and use that information to help them think of ideas is far superior, more personalized, better for critical thinking, and overall what we used to call the educational process. Brainstorming with AI can quickly become another way of having AI spoon-feed the "answer."

I'm still confident there will be a place for AI, but at the moment the cons mostly outweigh the pros for me. I've developed some new prompts I am going to try with my independent research students this fall, designed to stimulate a Q&A dialogue, but the AI still loves to do the work for them and "show off" its training data!
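To give a concrete sense of the kind of constraint I mean, here is a minimal sketch in Python against the OpenAI chat API. The model name and the wording of the system prompt are purely illustrative placeholders of mine, not a tested recipe:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative system prompt: try to force Socratic questioning so the
# model cannot simply "do the work" and hand the student an answer.
SOCRATIC_TUTOR = (
    "You are a research tutor. Never supply answers, theses, outlines, or "
    "finished prose. Reply only with two or three probing questions that "
    "push the student to examine their sources, assumptions, and reasoning."
)

def tutor_reply(student_message: str) -> str:
    """Return the tutor's question-only response to a student message."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SOCRATIC_TUTOR},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

print(tutor_reply("My topic is microplastics in drinking water."))
```

Even with a prompt like this, in my experience the model drifts back toward answering after a few turns, which is exactly the "showing off" problem.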
