Discussion about this post

Stephen Fitzpatrick

I love this post for a lot of reasons. While I understand the interest in having an AI write in "your voice," this is mostly a concern for professional writers who actually have a corpus of content and a voice and style capable of being emulated. My fear is two-fold: that students will try to use academic texts from their research as the basis for the writing style of their own work, and that they will simply use the better-written output as a substitute for their own writing. My sense, based on what I'm seeing, is that most students would not go to the lengths of uploading their own work just to have an AI write in their voice. Any student capable of that level of sophistication with AI is probably not one we need to worry about in terms of academic integrity.

To me, though, the more interesting issue with models like Claude-3 and all the rest going forward (Marc, what have you heard about GPT-5?) is the increasing sophistication in working with texts: not just producing output, but interacting with a text by querying it, getting ideas, probing conclusions, and simply using the AI to have a conversation. Prior to Claude-3, I was not having great experiences with AI's ability to comb through lengthy PDFs at a granular level in a helpful way, but I have found the newest model much, much better. For example, as a big fan of Marc's substack, it would be an interesting experiment to upload a bunch of his most provocative columns and have a conversation about them with the AI.

Those are some positive uses I can see, but as with any powerful technology, the downsides may ultimately outweigh the productive applications. I do think (ironically, given the points made in the Claude-generated post!) that a reckoning is indeed coming, and schools will not be able to ignore the issue much longer.

John Warner

I'm glad you shared your experiment, because I just did something similar with Claude and had markedly different results. I used it for the column I write every week for the Chicago Tribune: short (600-word), single-topic pieces that get in and out of the subject in a way that (hopefully) offers something intriguing but doesn't have the space for serious exploration. I ran the same test I'd done previously with GPT-4, which had produced an uncanny-valley version of me that was fairly terrible, and Claude (the most advanced model) was even worse: that uncanny-valley quality kicked up another notch, to the point of parody. I don't know whether this is something about the model or my "voice" or what, but lots of people had told me that Claude was better at sounding human, and in my specific case it was markedly worse, at least as I perceive my own voice. Any idea what's happening here?
