Discussion about this post

Stephen Fitzpatrick:

Of course you've seen the entire Moltbook story, which seems like it was predicted by your Maggie Appleton example. My one quibble here is that, while I completely agree that the mindset shift is the critical one, folks are still going to have to get comfortable with the tools and figure out what to use and how to use it. AI 2027 really drilled down on the importance of coding improvements, which it looks like we are getting, but the more AI moves in that direction (and OpenAI's recent drop of 4o essentially acknowledged they are more focused on coding than writing), the tougher it will be for non-technical people to navigate these systems. There is going to be a baseline level of understanding and using AI platforms required to even get to the point where you can take advantage of agentic workflows in the first place. Most people I know would not be able to install Claude Cowork. One of the biggest myths I've encountered is that because younger people are "digital natives" they automatically know how to use these systems better than adults. That misses the point that these platforms and how they operate are new to everyone - there are some things kids can do better online, but I have not seen it when it comes to using AI.

Stephen Badalamente:

Lately I've been trying to create a solid prompt for Copilot to help students with APA style. At first I was really pleased with the output - until my colleague got completely different results. Trying it with a more random (and realistic) set of citations, I found even more errors - and they changed with each 'improved' prompt. Finally, I realized that it was writing and running new code every time it ran the prompt, which was not only introducing novel errors but also seemed unsustainable.

There are a couple of things I have realized (aside from the fact that Copilot just isn't cutting it). First, I wasn't paying close enough attention to what it was actually doing - yes, it was 'helping' me with my prompt, but it wasn't asking me what I really wanted or telling me what approach it was taking. Second, I was spending a lot of time trying to facilitate what I believe is a pointless student task: at the lower-division undergraduate level, students just need to understand citation fundamentals, not whether to capitalize this or italicize that.

So yes, I wasn't in the right mindset. I wasn't thinking critically about AI and I wasn't reflecting on my behavior in response to AI. I could have manually fixed every citation for every student I was trying to help and had time left over to complain bitterly about it, instead of teaching AI how to do the 'grunt work' (as it put it) for them.
