12 Comments

This point nails the central problem: "we are all involved in a massive public experiment no one asked for." And at the same time, you helpfully digest and feature some of the most dramatic developments in AI (for just this week!!). One point I and others made at the AAC&U AI Institute was that it's impossible to keep up, even for those of us spending hours a day attempting to do so! Your critique and your practical guidance are both useful in navigating this rapidly shifting terrain.

Marc Watkins (author)

I thought you gave a wonderful presentation, Annette! I agree, we cannot keep pace. I think one area we can focus on is exploring use cases and frameworks while balancing them against the traditional skills we already teach. I also think we need to find more funding for fellowships and stipends to pay people for their time.

I feel exhausted after reading this post. 😅

This! So much happening and not even close to the bandwidth to grapple with it.

Sep 15 · Liked by Marc Watkins

Marc - Thanks again for keeping these posts coming. I lost steam a while ago trying to keep up with everything that was going on. I had played around with Google's NotebookLM a while back, but it made so many mistakes when I queried the PDFs and other documents I provided that I gave up. The addition of the podcast feature, though, definitely blew me away. And this is happening all over the place, with all sorts of different companies, different audiences, and so on. There is just no way that even those of us who are inclined to experiment with AI can possibly stay abreast of the latest developments. I'm not sure what the solution is.

In some ways, this feels a little like the explosion of tools developed to support teachers over the past 15 years or so - some teachers use Quizlet or EdPuzzle or Gimkit or Kahoot or the hundreds of other pre-AI online products available. Now AI is the shiny new thing, and all of these tools need to incorporate AI features and make all sorts of promises about saving time, being more efficient, etc. It's overwhelming. It's also telling how quickly we get complacent about AI abilities most of us thought were total science fiction even just a few years ago.

What, if anything, are you hearing about GPT-5? The Altman post makes me wonder if he is deliberately underpromising so the next major release won't underwhelm or whether, as he has suggested elsewhere, major new breakthroughs are in the offing.

Last comment - I went to an AI conference at MIT this summer and was disappointed. The bottom line, like William Goldman said of the movie business: nobody knows anything. It's pure speculation and guesswork. We just have no idea where the AI story is going to lead.

Sep 15 · Liked by Marc Watkins

Spot on! There is also a kind of desensitisation that comes with the rapid change. It feels commonplace now to see something that would have been unimaginable pre-ChatGPT. We’ve hit peak 🤯, although I don’t think that’s going to stop the AI boosters overusing that emoji. 😂

Marc, I had not played with the "podcast" feature in NotebookLM until yesterday. I am both amazed and made uneasy by it. While it is amazing that it can do it that well, it is also clear after just a couple of experiments that it is formulaic and that it is not necessarily drawing the kinds of deep conclusions that come with reading. At the moment it feels a bit insidious. It undermines the inner dialog one has when reading. Instead of allowing a reader to internalize an article and interact in their mind with other things they have read or thought, it turns it into a fairly trite podcast that repeats errors and reinforces conclusions without questioning them. There are no fresh observations. An important set of thought processes has been externalized without any gain other than ease of understanding. As you and others have noted before, there is a need for friction, for wrestling with words and ideas. This is an externalization of a thought process that we may regret externalizing. I need to work more with it, hear more examples, but, for now, it feels like a dumbing down.

Marc Watkins (author)

I agree, Guy. It's all novelty at this stage. We're all so exhausted by the deluge from so many AI apps that many will likely mistake novelty for something more profound.

Your last few paragraphs recall Neil Postman's points about technological change. These developments aren't just tools that we use or don't use. You don't drop these AI "tools" into the world and get the world + AI. No, we are going to see ecological change with rippling effects, unintended consequences, and disproportionate impact. There are going to be massive societal implications that we haven't imagined yet and that may not be clear until we're too far down the road to turn around...

"Desirable difficulty" is a great phrase for the toil of learning.

What fools are we, who cannot see, woods for the tree!

How are we to know that this information hasn't already been processed by an AI entity, making it biased in a way that frightens us into adopting more AI rather than less? Perhaps this comment is also being generated by the same biased source! It is my conclusion that we badly need to develop an independent critical review so that we can read and edit anything generated through AI before it is released for public consumption, and that regulation should specify that such editing must be done and its level recorded along with any new material being released. Without this certification it should automatically be scrapped. Do you believe me when I claim this applies to what I am writing here too? I have been brought up to believe that only a tenth of what you hear and only half of what you see is worthy of my attention as being true.
