17 Comments

Great essay! I would add that it isn't just tools but also methods. We can champion all kinds of alternative lower-stakes assignments, and they may help, just as some tools may, but there is no substitute for ethical judgement, critical thought, and empathy on the part of a teacher who cares about both the subject and the student.

I believe this paper came out before ChatGPT was released to the public in the Fall of 2022:

https://drsaraheaton.wordpress.com/2023/02/25/6-tenets-of-postplagiarism-writing-in-the-age-of-artificial-intelligence/

The part that jumped out at me was this tenet: "Hybrid writing, co-created by human and artificial intelligence together is becoming prevalent. Soon it will be the norm. Trying to determine where the human ends and where the artificial intelligence begins is pointless and futile." I've thought about "pointless and futile" a lot, and posts like this make me wonder if all the focus on AI detection will ultimately be a giant waste of time. And yet, from an instructional standpoint, we must be prepared to help students with the basics before they learn where AI may or may not help them in their writing process. The next few years will continue to be a wild ride for writing teachers as we try to crack the code on what works best and what doesn't, all while new models are continuously released, which will challenge us to constantly refine what it all means for students. But I know that students are desperate for guidance, and an adversarial approach revolving around grades and assessment is not the most fruitful path.

Excellent piece!! I’ll just add that one of my biggest worries (which I’ve been talking a whole lot about) is that detection/policing-based approaches (whether automated or based on our own intuitions) seem to be re-entrenching linguistic biases in really problematic ways. Sharing links here to a couple of talks (one from CCCC, one from AAAL, both of which had to be pre-recorded because I wasn’t able to travel, which I suppose has the upside that I can easily share them!).

CCCCs talk (part of the “Cognition and Writing” Standing Group’s sponsored panel, “I’m not a robot”): https://docs.google.com/presentation/d/1x-zq27UcPesEgxRUwWxDeZbD4tv4QxXpTlXyZ2uUXWw/edit

AAAL talk (part of a colloquium centered on a book that will be coming out from Routledge, “Rethinking Writing Education in the Age of Generative AI”): https://docs.google.com/presentation/d/12IYqr_7C1ccb7HVq9I1oQ7ncLIV9-tfxRcc97P2_wzY/edit

Hey Marc, really informative piece. Thank you for sharing. What are your thoughts on authorship determination/linguistic fingerprinting? To me, it is a really effective tool for a larger toolkit of approaches. I think it scares people away because it encompasses hints of *detection*, which is kind of a dirty word right now (rightly so). But it's been proven to be pretty effective for decades now and does not try to detect AI at all. What do you think?

Hi Mike, my limited understanding is that modern stylometry techniques all use advanced machine learning/neural networks and natural language processing to produce authenticity scores about authorship. Turnitin has had a system out since 2018 and tried to sell it as an add-on, mostly to guard against contract cheating. I have no idea if it is effective when dealing with AI, but I am doubtful. Most detectors are easily gamed. Now that ChatGPT has memory and there are dozens of videos describing how to upload your previous writing to fine-tune an output, I wouldn't put faith in any detector.
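
To give a rough sense of what those authorship scores are built on, here is a minimal sketch of the classic stylometric framing: build a frequency profile of character n-grams from writing known to be the student's, then score a new submission by its similarity to that profile. This is my own toy illustration, not Turnitin's or any vendor's actual method; the texts are placeholders, and real systems use far richer features and trained models.

```python
# Toy stylometric authorship check: character n-gram profile + cosine similarity.
# Illustration only; not any vendor's actual algorithm.
from collections import Counter
from math import sqrt

def char_ngrams(text: str, n: int = 3) -> Counter:
    """Count overlapping character n-grams, a common stylometric feature."""
    text = " ".join(text.lower().split())  # normalize case and whitespace
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Placeholder data: prior essays form the student's profile.
prior_essays = [
    "First essay the student is known to have written earlier in the term.",
    "Second essay, also known to be the student's own work.",
]
new_submission = "The essay whose authorship is being checked."

profile = Counter()
for essay in prior_essays:
    profile += char_ngrams(essay)

score = cosine_similarity(profile, char_ngrams(new_submission))
print(f"Similarity to prior writing: {score:.2f}")
```

Even a score like this only says "statistically similar to earlier samples"; it can't prove who sat at the keyboard, and a student who fine-tunes an AI on their own prior writing could end up looking similar by design.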

Great piece! At the middle school level, I freaked out the first time I saw Grammarly do its new AI tricks on a student Chromebook. And yet I use programs like GPT to help with more and more small tasks.

Truth be told, I've read about some interventions in academic journals yet can't find the stated curricula. So this summer I'm using GPT to reverse engineer materials.

All this to say, while I'll forbid it on principle for the time being, if my students ask, it'll be hard to lie. It's free help and can save untold hours of work. There will be no AI watermark except for my own honesty.

Curious if Marc or anyone else has seen or tried the approach being used by this outfit: https://cursivetechnology.com. I don't quite get the science behind it, but apparently there is a biometric signature to how we type that is unique to each individual. Personally, I'm anti-surveillance in education at every level, and I wouldn't spend my time trying to suss out whether or not a piece of writing was AI-generated, but for systems where that kind of checking is going to happen anyway, maybe this is a superior method. Any thoughts, anyone?

They're using machine learning to try to predict whether your writing is your own. It simply flips AI trying to catch AI and replaces it with AI used to guess whether you authored the writing. There's no way it is 100% accurate, and from the looks of it, it uses surveillance to monitor the writing process: "We’ve worked to develop a simple tool that provides “continuous authorship” (otherwise known as “continuous authentication”): using data collected during the writing process we’ve collected and trained a machine learning model that can consistently identify a student across their submissions in Moodle and a Chrome Extension"

John, it still uses AI to judge student writing, though it is not detecting AI. It is comparing a given student's writing with how they have written on other occasions. It apparently forces them to write in a particular tool that may not be their preferred one or suitable for longer assignments. Here is a demo of how it works in Moodle.

https://youtu.be/GFXYrDMuX_g?feature=shared

My understanding, though, is that it's not comparing their writing but their "typing," the actual manner of the keystrokes. It's AI, but it looks more like pre-generative-AI mechanical algorithmic analysis that's at work. It makes no analysis of the content, per se, at least as I understand it. Don't get me wrong, I'd never make students write in an LMS or particular system, so it's not for me, but I don't think it's trying to solve the problem in the same manner as the company Marc talks about in this post.
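
For what it's worth, here is a toy sketch of what a "typing signature" can mean mechanically: summarize the timing between key presses into per-key-pair averages and compare a new session against timings from earlier sessions. This is my own illustration with made-up numbers, not Cursive's actual pipeline, which by their own account uses a trained machine learning model and richer features.

```python
# Toy keystroke-dynamics comparison: per-digraph flight times vs. a stored profile.
# Illustration only; timestamps are invented and the method is not Cursive's.
from statistics import mean

def digraph_flight_times(events: list[tuple[str, float]]) -> dict[tuple[str, str], float]:
    """Average time between consecutive key presses, grouped by key pair."""
    gaps: dict[tuple[str, str], list[float]] = {}
    for (k1, t1), (k2, t2) in zip(events, events[1:]):
        gaps.setdefault((k1, k2), []).append(t2 - t1)
    return {pair: mean(times) for pair, times in gaps.items()}

def profile_distance(profile: dict, session: dict) -> float:
    """Mean absolute timing difference over key pairs seen in both samples."""
    shared = profile.keys() & session.keys()
    if not shared:
        return float("inf")
    return mean(abs(profile[p] - session[p]) for p in shared)

# Made-up keystroke logs: (key, press time in ms) from an enrolled session
# and from the session being checked.
enrolled = [("t", 0), ("h", 110), ("e", 205), (" ", 330), ("t", 420), ("h", 525)]
current = [("t", 0), ("h", 140), ("e", 260), (" ", 400), ("t", 510), ("h", 660)]

dist = profile_distance(digraph_flight_times(enrolled), digraph_flight_times(current))
print(f"Timing distance from enrolled profile: {dist:.1f} ms")
```

Notice that nothing in that kind of comparison looks at the content of the writing, which is the distinction I'm trying to draw.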

It's a form of keystroke logging. Employers have been using the technique to monitor remote workers since before the pandemic. It's highly intrusive surveillance and easily gamed. Essentially it is malware that monitors how you use your device. Many vendors sell software to bypass it, even a vibrator to attach to your mouse to show that it's moving.

I know about that surveillance stuff, but this looks somewhat different to me in that it seems to be measuring the typing signature that happens when writing, not just monitoring "activity" like the corporate surveillance tech that can be gamed with a mouse jiggle. Look, I'd never use this stuff, but it does strike me as a potentially less intrusive, maybe even more accurate approach than AI "detection."

John,

They talk extensively about using keystroke logging in their own company materials. They combine this with adversarial stylometry using ML/AI to predict who authored a piece of writing. You can read it here: https://cursivetechnology.com/writing-biometrics-and-academic-integrity/#_ftn1

Yes, I think that's correct. That is why I wrote "how" not "what." I would expect a higher level of accuracy than from an AI detector, but I still suspect there would be false positives and increased stress. I am also opposed to forcing students to write longer assignments with a particular tool in a particular way on a consistent basis.

It’s easier to outsource detection than to rethink the approach. As GPTs and AI technologies become integrated into everything we do, education has to change. GPTs will likely serve as co-pilots, aiding students and workers in various tasks.

There is a lot of busy work in schools today. There should be more time spent fostering critical thinking and active learning through debates and meaningful conversations, requiring students to read more and articulate their thoughts, which usually leads to better writing.

As a mother of a college student, I've been reflecting a lot on the value my child is getting from school. We're investing significant money in higher education, yet I feel there are a lot of gaps: online classes because the schools are oversubscribed? Meanwhile, his basketball team has resources like a dedicated plane for games. Is this the type of higher education we want for the future?

There’s more to question in the education system than just the use of GPTs.

What's scary about this is that K-5 students have become so accustomed to using Grammarly in school that they are unable to see where their own thinking ends and the AI's begins. Whereas students used to use Spell Checker in GDocs or MWord to check completed thinking, now they rely on Grammarly/AI to complete their thinking for them. I worry that when my fifth graders get older, they will have the rug ripped out from under them and will flounder as writers, all because they didn't realize how much they have been inadvertently using AI to write for them.

The impression this website gives is that ethics is something human which AI cannot possibly understand or build on, because such an electronic decision-making device is presumed to be incapable of the feelings and humanity involved. I don't believe this to be true. I think the kind of thinking and feeling we have about ethical ideas and ideals can be included in such a device's memory and in the associated judgement it develops for weighing what counts as good, relevant data and answers. Of course, not everyone's ethical ideals are the same, but this is easily handled when the AI's memory has been arranged to include them all!
