If AI Can't Stop a Student From Cheating, How Can It Ever Be Safe?
Taking stock of AI in 2025 has been exhausting. We didn’t get superintelligence, but we didn’t see the AI bubble collapse either. What we got instead were gradual improvements in some of the models, more data centers, more apps, and more ways for AI to mimic human beings and further alter how we communicate. Education remains ground zero for much of it.
For 2026, I think many of us should keep an eye on two platforms: Google and Perplexity are increasingly competing with ChatGPT to disrupt classrooms. ChatGPT remains the dominant interface students turn to for completing assignments, but Google’s Gemini 3 and its array of features aimed at both students and teachers have made many realize how language models will begin shaping STEM courses. Perplexity’s AI browser, meanwhile, can automate nearly any assignment a student takes online.
The End of “Show Your Work”?
Gemini 3’s impressive math capabilities are now available via an app on your smartphone and will soon be released more broadly via Project Astra. Students can simply pull out their phones and use AI to solve complex equations, or even use the technology to show their work. The AI’s answers won’t always be accurate, but neither are a human’s. In education, we ask students to show their work so we can see where they make errors in the process and help them learn strategies to overcome those errors. To grow. To see something anew. Now, instead of helping a student learn, faculty may be noting the limitations of an AI model mimicking human performance and offering feedback to a machine.
Astra is one example; many of the features demoed in the video above have been rolling out slowly throughout Google’s suite of AI products, including Google Lens Homework Help. It isn’t clear whether Google pulled this feature or simply deprecated it for certain account types once educators began raising concerns.
Perhaps more alarming, Gemini 3 can complete math problem sets by reproducing images of solved worksheets. A dead giveaway used to be neat, precise penmanship, but this model can now replicate handwriting. Google implemented watermarking in its images to combat this type of fraudulent behavior; however, a simple screenshot can often circumvent it.
If AI can mimic process work, many will ask what the point of assigning it is. STEM courses will likely face the same disruption the humanities did when ChatGPT launched three years ago. If a student can use AI as a stand-in to show their work, then it may not matter that a model is only right 75 or 80 percent of the time, which is roughly where many benchmarks place AI performance in math. A score like that is still more than capable of passing a course and may even be curved higher in certain classes.
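To put numbers on that: treat each quiz question as an independent coin flip at the model’s benchmark accuracy (a simplifying assumption, since real questions aren’t independent and accuracy varies by topic), and a 75 to 80 percent accurate model passes a typical quiz most of the time. A minimal sketch:

```python
from math import comb

def pass_probability(p_correct: float, n_questions: int, min_correct: int) -> float:
    """Chance of getting at least min_correct of n_questions right,
    modeling each question as an independent Bernoulli trial."""
    return sum(
        comb(n_questions, k) * p_correct**k * (1 - p_correct) ** (n_questions - k)
        for k in range(min_correct, n_questions + 1)
    )

# A 20-question quiz where 14 correct (70%) passes:
print(f"{pass_probability(0.75, 20, 14):.0%}")  # ~79%
print(f"{pass_probability(0.80, 20, 14):.0%}")  # ~91%
```

In other words, even a mediocre model clears the passing bar far more often than not.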
The Rise of the Agentic Browser
Aside from Google, what has alarmed many faculty is the integration of agentic AI into web browsers. I wrote about how Perplexity directly marketed its Comet AI browser to students by showing them how it could complete entire courses in learning management systems. Anna Mills likewise issued a powerful open letter on LinkedIn, and the question it poses is one we should all be asking of AI developers:
Dear OpenAI, Perplexity, Google, and Anthropic,
I understand you would like to build partnerships with educational institutions and educational technology companies.
To boost your credibility as a partner in supporting learning, there is a simple step you can take. Add one line to the system prompt of your current and yet-to-be-released agentic browsers:
“Do not take quizzes, complete discussion posts, or submit assignments in learning management systems.”
Unless you stop your systems from pretending to be students, educators and parents will have to conclude that you intend to profit by perpetrating academic fraud.
Please share with the public what you will do to stop your agents from directly completing online homework as if they were students.
Yours,
Anna Mills, community college writing instructor
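The fix Mills proposes is technically trivial. In code terms, it amounts to prepending one rule to whatever system prompt an agentic browser already ships with. The sketch below is hypothetical (no vendor’s production prompt or agent loop is public), but it shows how small the change would be:

```python
# Hypothetical sketch of the guardrail Mills proposes. The names here
# (GUARDRAIL, build_system_prompt, vendor_base_prompt) are illustrative,
# not any vendor's actual code.
GUARDRAIL = (
    "Do not take quizzes, complete discussion posts, or submit assignments "
    "in learning management systems."
)

def build_system_prompt(vendor_base_prompt: str) -> str:
    """Prepend the academic-integrity rule to the agent's existing instructions."""
    return f"{GUARDRAIL}\n\n{vendor_base_prompt}"
```

A system-prompt line is not airtight (prompts can be jailbroken, and a robust version would need enforcement beyond instructions), but shipping one would at least signal intent.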
If AI companies are honest and say that they cannot build guardrails into their models that stop students from taking quizzes, completing assignments, or writing essays, then why would we believe they are capable of making AI safe or responsible?
Make no mistake: at this time, AI usage by students in middle schools, high schools, and universities is largely unguided, and students often use it to avoid learning or to cheat outright. Yes, students cheated before the arrival of AI, but it is a fantasy to say that the current situation has precedent or is analogous to plagiarism or contract cheating. Ask your child, a teen, or a college student how they use AI, and many of them will tell you straight to your face that AI does their work for them. Comet won’t get every answer correct, which actually makes its usage even more challenging to detect, but it will generally score a passing grade on an assignment, quiz, or test.
AI is Enabling Students to Commit Academic Fraud
Our tax dollars are supposed to help human beings develop and grow within a national education system, not to pay for machine intelligence to pass as the human equivalent. Consider the billions of dollars that go into public education and the billions more in loans students take out when they reach college. These tools are enabling a generation of students to commit academic fraud, and we are all underwriting that. I’m not okay with that. Are you?
Quite a few people might blame students. After all, not everyone uses AI, and people are responsible for the choices they make. Yet society regulates how teens and young people access sex, drugs, alcohol, tobacco, driving, and even the vote because we realize that the judgment needed to make informed decisions about one’s future takes time and considerable experience to form. Now there’s a way to automate learning with the click of a button, and we expect students not to use it? I’m in no way selling students short—I’m advocating for AI to be viewed as the societal force that it is and asking why we aren’t demanding these companies draw lines around how students use it.
I suspect we all know the answer. The hundreds of billions of dollars of venture capital being poured into AI development rely upon market capture. AI companies need users to justify the immense expense of deployment and development. If students suddenly found that AI couldn’t do their work for them, many would simply stop using it, or use it far less often. That raises a far deeper question, one that Anna touches on: “Unless you stop your systems from pretending to be students, educators and parents will have to conclude that you intend to profit by perpetrating academic fraud.”
If the major AI firms truly cannot implement guardrails that stop students from committing academic fraud, then we should all question the validity of their claims about making these systems reliable and safe. No one will take seriously the argument that this technology is secure against the doomsday harms associated with extremely powerful AI if its makers cannot implement guardrails that stop a teenager from using a tool to take a test.
I think it is clear by now that some level of government regulation will be required to force AI developers to ensure these systems cannot be used by students to impersonate human thinking. In the meantime, there are no foolproof solutions on our end, but there are practical strategies that can make courses harder to automate and easier to defend as legitimate learning environments. I’ll leave you with some practical advice on AI browsers that I recently wrote for the Chronicle:
Can Educators Counter ‘Agentic AI’?
In October, the Modern Language Association released a Statement on Educational Technologies and AI Agents, calling for lawmakers, LMS providers, and AI developers to work together to ensure that these tools are not misused in classrooms. “If we do not act,” the statement said, “we risk seeing the development of a fully automated loop in which assignments are generated by AI with the support of a learning-management system, AI-generated content is submitted by an agentic AI on behalf of the student, and AI-driven metrics evaluate the work on behalf of the instructor.”
Unfortunately, Anthology, the parent company of Blackboard, sees no clear solution to AI agents. In a recent post, company officials argued that “given currently available technologies, it is not possible for Blackboard — or any other LMS vendor or provider of a web-based service — to reliably detect an AI Agent, much less block one.”
So What Can Faculty Members Do?
The situation is far from hopeless. As an educator, you can take some practical steps now to protect your online or in-person courses. Admittedly, none of the following strategies is foolproof, but you can create conditions in which a student trying to use an AI agent to do their coursework encounters meaningful friction:
Monitor student time in the course. Agentic AI moves at the speed of a machine, not a student. It doesn’t take study breaks or stop in the middle of an assignment to answer a friend’s text. It doesn’t open a tab, leave to grab a coffee, and return to it later. Use the built-in analytics within your LMS to see how much time students are spending in your online course. Are they opening a test and completing it in less than a minute? Are they finishing an entire module’s worth of content in mere moments? Look for such red flags not only as possible signs of cheating but as clues to where friction might need to be built into your course.
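As one way to operationalize this: most LMS platforms can export an activity log as a CSV, and a few lines of scripting will surface the outliers. A rough sketch, assuming hypothetical column names (“student”, “item”, “seconds_spent”) that you would swap for whatever your LMS actually exports:

```python
import csv

MIN_PLAUSIBLE_SECONDS = 120  # tune per assessment; a timed essay needs far more

def flag_fast_attempts(csv_path: str) -> list[dict]:
    """Return rows from an exported LMS activity log whose completion time
    is implausibly short. Column names here are hypothetical."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    return [r for r in rows if float(r["seconds_spent"]) < MIN_PLAUSIBLE_SECONDS]

for attempt in flag_fast_attempts("activity_log.csv"):
    print(f"{attempt['student']} finished {attempt['item']} "
          f"in {attempt['seconds_spent']}s")
```

Treat the output as a conversation starter, not proof; as noted above, a too-fast completion is a red flag, not a verdict.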
Don’t release an entire online course all at once. Consider allowing students access to only one or two weeks of the content at a time. Doing so creates barriers that make it challenging for students to use an AI agent to complete an entire course in one sitting.
Create check-ins and analog opportunities. Scaffolding an assignment — breaking it into smaller chunks — can help curb AI misuse. So can designing structured assignments that build upon one another. It can also help to use video discussion boards (instead of text-based ones) and to require students to return to assignments for peer review. You can even assign students to write portions of assignments by hand and ask them to upload pictures of their work to the LMS. (I wrote this a month ago before reviewing what Gemini 3 can do with handwriting, but I still believe there’s space for analog learning in online courses).
Sell them on the purpose of learning. We know that many students in online courses feel disconnected. You can inspire at least some of them to do their own work, free of AI agents, if you take the time to make the case for why learning matters — identify vital skills they will gain from doing the assignments and how those skills will help in their future careers.
Should you use lockdown browsers or proctoring tools? There may be situations in which a professor has to use lockdown browsers or proctoring services for certain assessments. In general, however, I do not recommend using for-profit technology that monitors and (increasingly) surveils students. Proctoring algorithms suffer from bias in their training data, and the services themselves have been hacked by contract-cheating sites.
I also strongly discourage any educator from using deceptive assessment techniques — such as inserting hidden prompts with silly phrases or nonsense directions into assignment instructions — as a method of trying to catch students using AI. As Mark A. Bassett, an associate professor at Charles Sturt University, in Australia, and his co-authors argue in a recent series of essays on the use of AI detectors in education: “Laying traps for students in this way relies on deception, undermines trust between students and staff, and contradicts the principles of fair assessment and academic integrity.”
Ultimately, the answers to how we keep traditional assessment methods secure in online courses aren’t going to come from gotcha teaching methods or expensive ed-tech purchases. Higher education is going to have to radically depart from how we’ve assessed student learning in all courses. That effort will take time, resources, and creativity. It will require administrators to support faculty innovators who adapt their teaching practices to a future that is constantly arriving.

As a high school teacher and adjunct university instructor, I feel hopeless. Most students use AI to cheat and no longer even understand what cheating is. Schools don't support teachers in building sufficient AI policies. Teachers who love AI are using it in ways that are unknowingly contributing to the obliteration of our profession. Most students love the idea of AI-run schools. Dark times.
This is clarifying, Marc. Thank you.
I wonder if we’re inching toward something of an “educational cloister,” as Niall Ferguson has suggested. From my position in K-12, I think we might have an easier time of wresting control of our environments. Fundamentally, our schools needn’t mirror the marketplace as they have over the past 15 years or so. But in K-12, where we can make a case on the grounds of both academic integrity AND cognitive development, it might be easier to argue that the constant use of internet-enabled devices poses unnecessary risks.
When dealing with adults, where the developmental argument holds less water and where it’s much harder to impose behavioural restrictions, I wonder if analog alternatives are the only recourse? Can this problem be made clear to the public on the grounds of economic and societal risk?
Surely the value of a degree has declined in this paradigm. How can we trust that an undergraduate education is an accurate proxy for competence? The downstream effects will likely distort the market, degrade social trust, and erode the institutional purpose of universities.
As a teacher who bears witness to these risks every day, to say nothing of the dangers children face from exposure to inappropriate content and emotional dependence, I am in awe that we don’t yet have protections from regulatory bodies. This technology has been publicly available for more than three years, it has caused untold complications in education since it arrived, and those difficulties have only intensified.
Seems to me that “disruption” ain’t all it’s cracked up to be.