Robot Writing Tutor-Lexica
On the night of the 2008 presidential election, I sat in an MFA workshop and listened to a critique of a story I’d written by some understandably distracted folks. People were excited about the prospect of electing Obama, so it wasn’t too surprising to hear the professor talk at length about his fascination with Nate Silver and how data was going to let us know the outcome long before the polls closed. He was so excited that he talked for an hour, leaving little time to workshop three stories. Mine went last. I got fifteen minutes.
His feedback at the end of my story:
Marc: As always, clear, strong prose. Like your last story, though, this one is dramatically erratic. Last time, a lot happened in terms of conflict and confrontation. Here, virtually nothing of dramatic interest happens. This has an almost sociological feel to it. A “this is how people live” distance. The three characters avoid each other just when they should confront each other. Cordell walks off, Abner drives off. The rest is undramatic misery. Consider the basics. Whose story is this, who wants what, and who gets or doesn’t get it. Follow? Make the reader want to turn the page. To do that, things that have consequences have to happen.
Gutting, truly. But I didn’t follow his feedback. Instead, I went home, got drunk, watched Obama get elected, fixed some typos and sent the story off. The story won Boulevard’s Short Fiction Contest for Emerging Writers and was reprinted in the Pushcart Prize. Bill Henderson later called me and asked me to be a guest fiction editor for the next volume of the Pushcart Prize. Would any of that have happened if I’d followed his feedback?
Human beings are capable of offering some of the most thoughtful, erudite, and compelling feedback to a fellow writer; they’re also often easily distracted and miss the mark by miles. The one decent thing about using aggregated feedback from generative AI is that, by definition, it can’t be distracted, won’t bullshit you with glowing praise, and won’t heap ridiculous commentary onto you.
Although generative AI often provides lukewarm, generic feedback that lacks insight and interest, it can still be a useful tool for writers. It takes a writer's expertise to know what to ask the AI to focus on, but used this way it can serve as a digital mirror of our own writing process. I think there is a strong case for writers to try it out. The examples below can all be accessed by clicking the images.
Feedback Assistants
Sudowrite
Sudowrite has one of the better feedback systems in place for creative writers. It uses three distinct feedback personas when offering responses to prose:
Lex
Lex has also shipped some feedback features. It’s not fine-tuned for creative writing, but it still offers a helpful set of synthetic eyes on your writing:
MyEssayFeedback
Eric Kean is developing a feedback system for educators called MyEssayFeedback that lets a teacher create a prompt, then have students upload their writing and receive feedback.
Human in the Loop
None of these feedback assistants will replace a human teacher or peer tutor; rather, they’re set up to augment that feedback and hopefully provide a unique perspective to a writer. We all write at odd hours, at our own pace, and sometimes don’t have the luxury of attending an MFA program like I did or scheduling an appointment with a human being. When I worked as a peer tutor in a university writing center, I honestly spent more time being human with a writer than giving feedback. Most people coming in for assistance wanted to know if they were decent writers, if the fear they felt was justified, or if an idea they had was worthwhile enough to pursue. That’s something you could program into a machine, but the artifice will always cause a human being to recoil.
Prompting for Feedback
Try the following to see if you can create a prompt for feedback. You can ask ChatGPT to develop a persona of an editor, peer tutor, or educator for you. Tweak it to your liking, then test it out on a piece of writing. Click on the image to access the full example.
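As a sketch of what such a prompt might look like, here is a small Python helper that assembles a persona-based feedback prompt you could paste into ChatGPT or send through an API. The persona wording and focus questions below are illustrative examples (loosely echoing the "whose story is this" basics from the workshop anecdote), not taken from Sudowrite, Lex, or MyEssayFeedback.

```python
# Sketch: assemble a persona-based feedback prompt for a chat model.
# Persona text and focus questions are illustrative, not from any real tool.

def build_feedback_prompt(persona: str, focus_areas: list[str], draft: str) -> str:
    """Combine a reviewer persona, specific asks, and the draft into one prompt."""
    asks = "\n".join(f"- {area}" for area in focus_areas)
    return (
        f"You are {persona}.\n"
        "Give honest, specific feedback on the draft below. "
        "Skip generic praise; focus only on what could be improved.\n"
        f"Focus on:\n{asks}\n\n"
        f"DRAFT:\n{draft}"
    )

prompt = build_feedback_prompt(
    persona="a veteran fiction editor who values dramatic tension",
    focus_areas=[
        "Whose story is this?",
        "What does the protagonist want, and do they get it?",
        "Do events have consequences that make the reader turn the page?",
    ],
    draft="Cordell walked off. Abner drove off. The rest was quiet misery.",
)
print(prompt)
```

Swapping the persona string (editor, peer tutor, composition instructor) is the quickest way to test how much the framing changes the feedback you get back.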
Human or Generative: An Author Must Decide What Feedback Matters
If I had followed my professor’s feedback, I would have just trashed the story. Too often during my MFA, I saw maturing writers second-guess themselves and blindly trust misaligned or just plain awful feedback because someone senior to them, someone more established, took the time to read their work, and that by default meant something. But it doesn’t. It doesn’t matter how many books you’ve published or how many essays you’ve given feedback on; each of us can get it wrong. That doesn’t mean that GenAI is a solution, but it may have a place in the messy, imperfect process, if we find value in it.