Unchecked generative AI will create calls for intrusive surveillance, the kind that already exists for many remote workers, for all of us: students, teachers, and administrators, regardless of the modality we work in.
Many universities are shutting off their AI detector tools because they are too unreliable. Indeed, OpenAI quietly pulled its recent text classifier tool from the market due to its “low rate of accuracy.” Likewise, Turnitin’s AI detection tool is proving far less robust than advertised. The University of Pittsburgh has shut the feature off because of too many false positives and does not endorse any AI detection mechanism.
The failure of AI detection raises a much deeper question that is about to play out in the fall: if no reliable AI detection exists, what are academic institutions and society going to do about generative text? Maya Bodnick’s recent essay ChatGPT Can Already Pass Freshman Year at Harvard calls for faculty to abandon take-home essays in favor of in-person proctored exams. That’s a nifty idea for in-person learning, but it won’t work more broadly because it frames the problem of generative text solely as students using it to cheat.
Big Tech Envisions Everyone Using Generative AI
What Bodnick and others don’t quite grasp is that generative AI isn’t just a way for college students to offload the labor of learning; its siren call of saving time echoes throughout the digital world, into industry and across academic disciplines. What concerns me about such articles is the failure to imagine what it will mean once faculty adopt generative AI tools to grade student work, generate research, provide feedback, and more.
Faculty are terrified that students will offload the labor of learning, while administrators should be equally terrified that faculty will offload the labor of assessing student work, and students and faculty alike should fear how administrators will use generative AI to monitor and assess them. And that is just in education. Very soon, state and national governments will have to grapple with cheaply available machine-generated text and what it means for public trust in individuals and institutions. Police officers, attorneys, judges, safety inspectors, doctors, pharmacists, reporters, and editors will all be presented with the option to offload their labor to generative AI, with no reliable detection mechanism to check them.
“Proofing” Your Words To Resist AI
When Big Tech flips a switch and integrates generative AI into Microsoft and Google products, it creates a new economy, one where being able to prove the humanness of your words holds greater value and deeper cultural significance than at any previous point in our digital history. The ability to prove your words are your own, to proof your work by tracking the process of how you arrived at a piece of writing, becomes a commodity, one many areas of our society will desire now that generative text is common.
In many industrial applications, the act of proofing an object makes it resistant to the influence of outside forces. The ability to proof your words to ensure that they are your own and not generative text may soon become standard practice. This is where autonomous surveillance powered by machine learning, constantly monitoring your at-work interactions, comes into the picture.
This type of surveillance is already a reality for many remote workers. Before the pandemic, fewer than 10% of companies used monitoring software for remote workers. Since Covid, the nascent surveillance industry has ballooned, with a stunning 96% of companies now embracing some form of surveillance for hybrid and stay-at-home workers. Keystroke loggers, AI-powered webcam monitoring, mouse trackers, and browser trackers are the new normal for many remote employees.
Many campuses followed the digital surveillance trend with monitoring of their own, contracting with outside companies to host AI-powered proctoring services that required students to install spyware on their devices. This enabled remote proctoring during the pandemic, at the cost of many personal and academic freedoms, but it is nothing compared to what is coming.
Generative AI monitoring will require an even more invasive system to match the pervasive deployment of these tools. Imagine having a URL attached to every email you send, every document you write, every PowerPoint you create, and every login to the LMS, recording each digital movement you make from drafting to hitting send. This new level of surveillance creates a digital chain of custody for your words, proofing them, and there is unique value in such authenticity in a world awash in generative text. Under such a system, a simple audit or review could confirm what you did or did not write. And these systems won’t be easy to fool, because they will combine video, browser lockdowns, and keystroke logging.
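To make that concrete, here is a minimal sketch, in Python, of the kind of mechanism a chain of custody for words could rest on: an append-only log of writing events where each entry is hashed together with the one before it, so an auditor can verify that nothing was altered or deleted after the fact. The `WritingLedger` class and its event names are hypothetical illustrations of the idea, not any vendor’s actual product.

```python
import hashlib
import json
import time

def _digest(payload: str) -> str:
    """SHA-256 hex digest of a string."""
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

class WritingLedger:
    """Append-only, hash-chained log of writing events (hypothetical).

    Each entry stores the hash of the previous entry, so editing or
    deleting any record breaks every hash that follows it."""

    def __init__(self):
        self.entries = []

    def record(self, event: str, detail: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {
            "timestamp": time.time(),
            "event": event,      # e.g. "keystrokes", "paste", "send"
            "detail": detail,
            "prev_hash": prev_hash,
        }
        # Hash the entry's contents, then store the hash alongside them.
        entry["hash"] = _digest(json.dumps(entry, sort_keys=True))
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the whole chain; False means the log was tampered with."""
        prev_hash = "genesis"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev_hash:
                return False
            if _digest(json.dumps(body, sort_keys=True)) != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

ledger = WritingLedger()
ledger.record("keystrokes", "typed 214 characters in draft.docx")
ledger.record("paste", "inserted 1,800 characters from clipboard")
ledger.record("send", "submitted draft.docx to LMS")
assert ledger.verify()
```

Note how the paste event stands out: in a world of proofed words, a large block of pasted text with no typing history behind it is exactly the sort of anomaly an audit would flag.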
What’s fascinating about this is that it isn’t just your employer who will want to see such evidence; tech companies will want it too. The large language models that power ChatGPT need massive troves of human-written data to train on, or else their quality degrades when they are fed their own synthetic output. Your human writing is thus crucial to training future generative AI systems!
Let’s Hope Resistance Isn’t Futile
What frightens me the most about what I’ve outlined so far is how little control we appear to have in this process, because it is already normalized and present in much of the workforce and in education. Solutionism is the heart of the problem. Many turned to AI detection as a technological solution to curb generative text, which itself was marketed as a technological solution to fix work. Now that the former has failed and the latter is set to become commonplace in our lives, I see a rush to surveil all aspects of our digital interactions as the next proposed solution.
An even darker thought is what employers and state legislatures could do with this vast trove of monitored data. Forget a conservative-backed state audit reading faculty emails at public institutions of higher learning. That’s far too simplistic and unimaginative given the nature of the technology. Imagine instead an auditor using a language model to run a sentiment analysis on the type of feedback a faculty member provided students over a period of years, probing for any hint of political bias, any data suggesting higher or lower grades for certain groups of students. If it sounds Orwellian, that’s because it absolutely is.
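To see how low the barrier to that kind of audit already is, here is a minimal sketch, in Python, of the scenario I just described, assuming a hypothetical CSV export of LMS feedback with student_group and comment columns. It uses an off-the-shelf sentiment model from Hugging Face’s transformers library; the data source and column names are assumptions for illustration, not anything a state actually runs today.

```python
# A sketch of the hypothetical audit described above: average the
# sentiment of a professor's feedback, broken out by student group.
# The CSV file and its columns are assumptions for illustration.
import csv
from collections import defaultdict
from statistics import mean

from transformers import pipeline  # pip install transformers

classifier = pipeline("sentiment-analysis")  # generic pretrained model

scores = defaultdict(list)
with open("faculty_feedback.csv", newline="") as f:
    for row in csv.DictReader(f):
        result = classifier(row["comment"][:512])[0]  # crude truncation of long comments
        signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
        scores[row["student_group"]].append(signed)

# A crude, easily abused "bias" signal: mean sentiment per group.
for group, values in sorted(scores.items()):
    print(f"{group}: {mean(values):+.3f} across {len(values)} comments")
```

A dozen lines and a spreadsheet export are all an auditor would need. The danger isn’t the sophistication of the analysis; it’s how cheap, automatic, and official-looking it is.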
Our desire to preserve what is human about text may mean ceding to automated systems the very autonomy and freedom we are trying to protect in our digital interactions. Instead of using AI to catch AI, as the previous generation of detectors did, your company, school, or state government may embrace AI-powered surveillance to monitor the work you submit and ensure it came from a human being, not a language model. Otherwise, there will be no way to prove your words originated from you, further chipping away at the artifice of trust we assume in our digital interactions.
Too dark for you? Trust me, I would rather have AI turn me into a battery than live through this scenario.
Found this Helpful? Sign up for My Professional Development Course!
Generative AI in Education has over twenty sections, and the first pathway focuses on AI literacy. That understanding will be foundational in education as major tech companies continue to deploy and scale generative AI systems in public. I’ve released assignments from the course under a free CC BY license, and I offer discounted group pricing, scholarships, and OER Fellowships for access to the course.