10 Comments
Marcus Luther

"Adopting or resisting AI feels increasingly impossible."

This might be my favorite piece of yours for its acknowledgment of how stuck we sort of are—and how we are lying to ourselves if we believe either enthusiastic adoption or resistance is even possible. (Indeed, the "enthusiasm" in either direction feels like self-delusion the more I step back and scrutinize it.)

For me, I'm in lockstep with you that the right path is to be intentionally slow and deliberate while staying stubbornly curious.

A very bad analogy for this, perhaps, would be the opening Red-Light-Green-Light game from Squid Game; those rushing ahead are destined to, well, you know if you know—but if you wait too long to move forward, the results are equally grim.

Marc Watkins

Thanks, man! I think we're going to be in increasingly challenging situations going forward. I hope we can avoid the Squid Game, or at least get the SNL-ish parody of it!

Stephen Fitzpatrick

Great post with lots of provocative and unanswerable questions. My one quibble is that I think you take the VandeHei quote out of context - he is speaking to his finance and legal team internally. For them, it is career suicide not to be experimenting. It further underscores the disconnect between academia and industry. Who are the kids going to listen to? While it may not be career suicide if faculty don't at least get familiar with AI tools, I think it may be career irrelevance. But what this post gets at, and what I wish more faculty understood, is the breadth and pace of change that has upended the entire conversation. At the moment, none of this looks like it is slowing down.

Marc Watkins

I read it much more as "here's what we're doing at Axios and what you should be doing, too." I think he's overly optimistic about developers fixing all of the issues with statements like this: "The bottom line: Experiment assuming the current glitches — usually hallucinations or incorrect answers — will be fixed as models improve. These glitches keep us from currently using AI much beyond experimentation and augmentation."

Stephen Fitzpatrick

I'm not sure he would necessarily say that the "use it or lose it" attitude should be applied to educators. There is an interesting tension in some AI industry proponents within their fields being nervous or skeptical about their own kids using AI in schools. But my sense from what I am reading and the people I am talking to is that, while the current issues may not be fixed immediately, businesses cannot afford to hold back and be caught flat-footed if and when the models do reach that point. That is certainly what is behind Dario Amodei's comments and predictions. Academia and industry are fundamentally misaligned in that way - the slow, deliberate approach you see as a strength in higher ed is definitely viewed as a path to obsolescence by companies.

Stephen Badalamente

This is an amazing post, Marc. Obviously, you don't believe that we shouldn't try to wrestle with these issues, but you also cannot guarantee we'll win (whatever winning looks like). I'm concerned that the labs that do the longitudinal studies on the impact of technology on learning will not only be unable to keep up with the pace of change but will also have their funding eliminated. I'm wondering whether the time I've spent trying to understand the research capabilities of AI is wasted energy; not only in the sense that the behaviors change during my investigation, but also that we have no reason to believe the actual energy costs are sustainable. I think that all the fear and anger this topic has raised among our faculty these past years isn't excessive - if we focus on the right problems, we'll find it warrants much more.

C. O. Davidson Is Haunted

I’m contemplating requiring all handwritten tests and essays for in-person classes this fall, but I do have two major concerns: (1) being able to actually read students’ handwriting, and (2) students who need to use laptops for accommodations would suddenly stick out. But as an English professor, as candid as I’ve been about what is and isn’t appropriate AI use (knowing that, with grammar programs, a full-out ban is not reasonable), I have come to accept that most of my students’ work is likely partially or fully AI-generated, and it’s getting harder to tell. This summer in my online classes I feel like I’m grading the AI programs and not my students. I’m running out of ideas.

Apis Dea

Fantastic article. It makes me happy to be retired, but strangely it also makes me want to get back into teaching. IMHO, teachers are needed to teach context. I can't believe AI can inspire as much as a good teacher.

Miriam Reynoldson

"... the authors of the University of Sydney’s two-lane approach to assessment, envision separating education between secure and open assessments—give me a dose of AI-proof assignments along with an acknowledgment that anything outside of secure assessments could be generated via a machine. There’s certainly a logic to this, but each time I think about it, I find myself incredibly saddened by how quickly some have decided the way forward is to completely rebuild how we assess learning because of AI."

This is exactly how I feel. In Australia, we have been struggling for years and years to improve our assessment practices without real funding, and the two-lane approach is now garnering vast material resources to rebuild assessment infrastructure across the University of Sydney and beyond, in the name of optimising for generative AI.

Many of these "optimisations" will reverse long- and hard-fought wins towards better assessment practice. But "generative AI", apparently, is a more important keyword than "inclusion" or "validity".

Paula P

I appreciate all of your thoughts and advice here.
