15 Comments

Really helpful perspective across many fronts. In terms of framing an approach, I see things mostly similarly. In More Than Words, the concluding chapters are titled Resist, Renew, and Explore, where I first articulate what should be resisted (e.g., anthropomorphizing LLMs), then move to what we should renew (human connections, writing as an act of valuing a "unique intelligence"), and then to what we must explore (the ways technology can enhance human flourishing).

The challenge, as you articulate here, is that we're trying to do these things simultaneously and the ubiquity of the applications and speed of change (the unavoidable stuff) leaves it hard to pause and orient in ways that allow us to explore productively. My call in the book is to try to make space for the foundational work first, but it's clear this is not an easy thing.

John, fantastic. I am taking this exact approach across two semesters here in Pittsburgh at a public school. Just wrapping up the first semester, in a course called AI and Ethics, and we partnered a bit with local universities, especially Duquesne's Center for Ethics. The second semester starts in two weeks, and the course is on Human Flourishing. I'm de-emphasizing modern tech from time to time so we can really appreciate how a medium impacts a message, from ancient mediums up to electric mediums. Then back to questions of enhancement within the disciplines to help achieve "the good life." Looking forward to reading your book and hopefully staying in touch.

It's encouraging to hear that others are thinking along the same lines, especially the human flourishing part, which is the goal I establish at the start of the book by challenging the dominant "benefit" touted for LLMs when it comes to reading and writing: increased speed and efficiency.

Life is an experience and the experiences we have in life matter. That's what I try to explore, but the book is designed as a conversation, not a declaration. I'd love to hear more about what you're doing.

John - I'd be interested in the specifics of the arguments in your book - I'll keep my eyes out for a copy. When you say "resist," what do you mean precisely? I see that you use anthropomorphizing LLMs as an example, but I'm not sure how that is resisted (I'm sure you've seen the recent pieces in the NY Times and elsewhere regarding the rise of AI relationships) - who is doing the resisting? As an educator, I share a lot of the skepticism about many of the claims made by AI marketers, as Marc has documented so effectively, but my throughline on AI is the unavoidable part. My biggest takeaway from working with students and talking with them about it is that they are equally skeptical. I'm assuming in your work you have a place for the potential of AI, but "human flourishing" is another vague phrase. I don't know what the answer is, but wholesale resistance strikes me as unrealistic, especially given that these tools are hardly static - the conversations in another 12-36 months are going to continue to get more challenging.

So, the "resist" section is part very straightforward recommendations, and part an exploration of the larger attitudes and philosophies we bring towards the activities in which the technology might be involved.

For resisting anthropomorphizing, I'm essentially saying to teachers, journalists, or anyone else who is publicly communicating about this stuff not to use language or framing that suggests LLMs work like human brains or human consciousness. I have some unfortunate examples that I share that I think fundamentally distort the discussion. Another thing I recommend resisting is giving in to notions of "technological determinism" that suggest AI is inevitable, because I think it's important to preserve human agency. There's other stuff as well.

My general stance in the book is that AI is unavoidable en masse, but we should still allow room for individuals to reject it, though that rejection should be sufficiently informed, rather than coming out of a reflexive fear. The Resist, Renew, Explore sequence is meant as a starting point for thinking both individually and collectively about the issues raised by this technology. I purposefully give very few (if any) hard prescriptions about what we should do. The bulk of the book is spent making a detailed case for what is meaningful about reading and writing as human experiences - things that I think don't change no matter what technology comes along - and how we should be careful not to lose those things now that we have this text-generating device.

I had to focus on what I think should endure because the technology will evolve. I wrote the book in a way that (I hope) allows it to hold up even as the technology changes.

I don't think "human flourishing" is vague, per se. If I asked 100 people to describe the conditions of human flourishing they would likely have significant agreement. Where there's significant disagreement is what kinds of systems and experiences and structures give rise to human flourishing. The big AI companies essentially assert that an AGI future will de facto lead to incredible human flourishing (unless the AI kills us all that is). I think this is not necessarily true.

Our school system of the last 25-30 years has suggested that demonstrating "proficiencies" according to standardized metrics meant to make students "college and career ready" is a good route to human flourishing because it will help them secure the credentials and knowledge that will allow them to succeed as "human capital." Looking at student disengagement, school-related anxiety, and how many students are turning to LLMs to avoid doing school work that seems pointless and boring, I question that framing as well.

The goal I set for myself in the book is to open up the underlying questions so we can have a better shared, public discussion about this stuff, rather than letting it roll over us, scrambling to adapt short term along the way.

Sounds like a very interesting read. While I don't necessarily agree (agree may be the wrong word - I think you might oversimplify some of the issues surrounding AI, and it's much bigger than just its impact on education, which, quite frankly, I think was a bit of an afterthought for those who pioneered the technology), your 50,000-foot view is absolutely warranted given how little time most of us have had to prepare for generative AI in text, image, sound, video, and coding (which tends to get the least attention among humanities teachers). As a debate coach, we have been debating AI for over a decade. Before ChatGPT arrived, the main arguments in most debates revolved around AI replacing unskilled workers. As it turns out, LLMs are much more threatening to white-collar jobs. AI is not very good yet at performing tasks that require motor skills, though it is getting better.

I've spent an enormous amount of time, through a grant from my school, reading everything I can get my hands on about generative AI, and I am finding that the most important conversations and opinions I encounter are those I disagree with - it's essential to have your views challenged and contested in order to make them sharper, more nuanced, and informed. I've gone from AI evangelical in education (just look at the evolution of Marc's thinking from his initial posts), to more skeptical, and back again to more bullish given some recent developments and tools (NotebookLM from Google would be an example - this could transform reading, but I also share your concerns about reading being a fundamental human experience - I think it will continue to be, but perhaps in a more interactive way). I totally agree with you on the importance of recognizing that LLMs are NOT like human brains or human consciousness in any way, shape, or form. What's fascinating is how we are hard-wired to interact with something that seems to "get us" and "speak" with us in such a realistic way.

In the recent Times piece about the woman who is having a full-blown AI relationship, she seems completely aware that it's not a real person, but the feelings she gets from interacting with the chatbot make that almost beside the point. I've had similar moments in some AI conversations with Claude that seem so human-like that it's hard not to interact with it as one. If that's where we are now, what will AI conversations look like in another 18 months? 3 years? 5 years? How do you resist AI in the job market when your colleagues who use it are getting promoted over you? Last point, since it sounds like the primary focus of your book is on reading and writing: military applications of AI may end up being the most significant issue that makes it "inevitable." The Age of AI: And Our Human Future by Henry Kissinger and Eric Schmidt came out before ChatGPT and frames the issues around AI much more broadly. Anyway, I'll get your book and add it to my library on AI. I'm working on my own thing, but every time I try to pin down my position, something I read upends it. Not sure how many of these books will withstand the test of time! Even in the relatively short term.

For sure, the focus of my book is relatively narrow against the full scope of the technology's implications for society. I essentially bit off what I knew I could chew based on my background/experience/knowledge, and what I've published previously about teaching writing. I try to stay as informed as possible about the bigger frames, but I feel more like a spectator than a participant in those areas, if that makes sense.

We need as much conversation as possible about this stuff, IMO. Like with NotebookLM and reading, I would ask: what is the underlying goal of its being "transformed"? I don't really have an answer against the notion other than that reading is already entirely interactive if we're engaged with it, which is a skill I think school doesn't value enough.

Anyway, I'm pleased to hear the book interests you, and I'd be happy to hear any thoughts you have once you have a chance to read it.

I look forward to it. On the transformed reading issue, I'll give you a simple example. I am enrolled in a Great Books program (think Plato, Kant, Descartes, and dozens of other authors of classic and sometimes impenetrable works). Having first grappled with the readings on my own, I can put them into NotebookLM and literally engage in a conversation with the text. This is an upgrade - of course, doing the reading, annotating, and questioning on my own is critical. But the opportunity to bring my margin notes to an AI that has access to the entire corpus of the author's work and engage in critical dialogue about it has been nothing short of transformative. If you've read anything by Kant or other authors with page-long sentences and difficult, convoluted language, you can be as engaged as you want on your own, but, in my view, this is the future of interactive reading for these kinds of texts. Do I want that with reading for pleasure? No. But for difficult and challenging texts, I'm not sure how this is a bad thing, especially for students and people who have no access to experts. And it will get better and better, and deeper and deeper. But you have to know how to read on your own first and ask good questions, which, of course, is a skill. I would not introduce this until late high school, but I don't see how this won't be the norm within a fairly short period. That's my take at the moment.

"But you have to know how to read on your own first and ask good questions which, of course, is a skill." This is the key, right? And we, unfortunately, have lots of evidence that students are getting very little practice at that aspect of reading in school contexts, so a big part of the conversation is how to prepare students to make use of the tool in a way that's additive rather than a shortcut or substitute.

That kind of interaction is not new, per se. In grad school I was assigned a critical companion alongside James Joyce's Ulysses because it was assumed we'd have a hard time "understanding" the novel. It was helpful to an extent in explicating aspects of the text I never would've grokked in a million years, but I found it actively detracted from my engagement with the novel as a novel because it was explaining too much to me. Now, that's a novel and not a work of philosophy, but the process you describe has both benefits and compromises attached, and I can see a number of reasons why it could be a bad thing, depending on what you're valuing in the experience.

What happens when we don't have new experts because we've decided that interacting with an LLM interpreting for us is a good enough substitute for providing access to experts? Whose expertise is contained in the LLM? What are the biases of that LLM that we can't know because of the opacity of its training data? The process of humans encountering the work of humans will continue to make unique intelligences. Can the same be said of humans interacting with LLMs? Maybe, but I'm not certain about that.

We need good Socratic discussion across the HS curriculum about generative AI within each discipline - sometimes using the tools, sometimes just investigating the discussion around the tools themselves. We're only in a Technopoly insofar as we choose to participate in one, and then we can have discussions about scope and agency. Even these discussions right here are one example of beneficial digital tools - how would I ever have encountered your good work so quickly without you being on Substack? Thank you.

Smart, thoughtful, and useful as always, Marc.

Hi Marc! I enjoyed this post. While I know there are many others, my colleagues and I did author a framework for technological inquiry in the Harvard Educational Review that might interest you. It's titled "What relationships do we want with technology? Toward technoskepticism in schools": https://meridian.allenpress.com/her/article-abstract/93/4/486/497797/What-Relationships-Do-We-Want-with-Technology

You hit every major point that I see facing higher education here, and in ways that open up conversations rather than shut them down. Bravo!

Marc,

Enjoyed this post a lot, as I think you weave together so many of the issues about how the infrastructure and sharing of data across all platforms are deeply embedded in everything we do - AI is simply another layer on top of that. I was also intrigued by the environmental analysis you include here - I have not seen those numbers anywhere else, and most coverage of AI and its effect on the environment seldom puts it into the context you do here. One other point, and I would be curious about other educators' takes on this: my school is involved in something called the RAIL (Responsible AI in Learning) program (https://www.msaevolutionlab.com/rail), and I'm early in the modules, but a very clear point that is made repeatedly is NOT to think about AI as being "integrated" into existing practices but to do a total "reimagining of teaching and learning." This strikes me as Pollyannaish and idealistic and, even if true, seems unlikely to happen anytime soon. Most of the "use cases" you see out there are about using AI to do what we have always done (integration), just (theoretically) more efficiently and more effectively. What RAIL is suggesting is that this will fail in the long term. To do what they are asking requires a wholesale re-evaluation of educational practices, which most schools have not been capable of even before AI came on the scene. I don't know if they are right, but I do understand their point. Just another piece of the puzzle that raises the bar regarding all these issues being dealt with at once.
