15 Comments
Stephen Fitzpatrick

All of this was predictable as far as browser agentic AI goes, though the recent trend of companies (not just Perplexity) essentially leaning into the worst parts of AI as they impact teachers, students, and schools is what is so concerning. Even the release of ChatGPT's "study mode," which I wrote about over the summer, seemed benign on the surface, perhaps even helpful, but it ignored the fact that the ultimate goal of these companies is to get a generation of users addicted to a tool they cannot live without - of course they knew students would use these tools to complete work assigned to them "without lifting a finger." The recent release (just this week) of Claude's Skills is a powerful feature that lets users customize instructions and workflows to do impressive things like generate very solid presentations, Excel spreadsheets, and professional-looking documents, and it will only get more accurate - Claude is leaning into becoming the go-to model in the business community. Why wouldn't students also be expected to take advantage of these tools? The collision of corporate values vs. educational ones will continue to be on display, especially as these companies become more and more desperate for revenue. As much as I understand your position, I'm skeptical that a boycott of Perplexity is likely to have much impact.

Marc Watkins

I doubt it will have an impact, but I think educators have a voice that is often ignored in many of these instances, and using it can make some difference. I saw Claude's announcement, but haven't had a chance to review it.

Rob Nelson

Thanks, Marc. This is well-timed for a theme we'll be tackling in my class next week about how AI is changing higher education: What's so critical about critical AI? I especially appreciate you succinctly describing the form of the short video ads, what you call "the common script." I suspect my students will enjoy the critical analysis of the form along the lines you outline.

I agree that it's right to shame and boycott companies that can't (or won't) align their marketing with their "responsible AI" statements, while urging students to choose other options. In fact, I wish colleges and universities would coordinate more through governing bodies like EDUCAUSE to call out the "move fast and break people" mentality that drives AI companies to prioritize growth at any cost.

The risk, of course, is that we end up sounding like the worst stereotype of the nineteenth-century schoolmarm as we wag our collective fingers, thus inadvertently strengthening the bad-boy appeal of AI outlaws.

As a strategic matter, I think your opening is what's most important about this post: we need to lead students to understand how Silicon Valley operates through media and culture to sell us harmful products and experiences, a lesson they can apply across their broader encounters with consumer culture.

Andrew Cantarutti

Grammarly’s new ads rub up against cheating too: https://youtu.be/W2P7eixJOgc?si=susxmIN20Eci95bg

Bette A. Ludwig, PhD 🌱

Wow, Grammarly has changed from when I remember using it a few years ago.

Jane Rosenzweig

I clicked on one of the FB ads and now I'm seeing all of them, including a series with the tagline "don't lift a finger." (But also, what college students are on FB to see those?)

Annette Vee

I agree with Marc that this blatant marketing for cheating is terrible. Perplexity has been a sketchy, no-holds-barred company from the beginning though, so I'm not really surprised. And Srinivas was retweeting the video--he was, to my mind, obviously promoting it. He wrote "Absolutely don't do this" in the same spirit I've heard instructors say "You shouldn't visit sites like LibGen that give you your expensive textbooks for free."

Nick Potkalitsky

Amen! The goal, at this point, appears to be the wholesale dismantling of the educational system.

https://nickpotkalitsky.substack.com/p/the-reckoning-sora-2-and-the-year

Joseph Thibault

FWIW, Facebook states that academic fraud is strictly prohibited on its platform, going so far as to say "Ads can't promote: Dishonest practices...[including those that] Enable people to cheat on exams or drug tests."

https://transparency.meta.com/policies/ad-standards/deceptive-content/cheating-and-deceitful-practices/#:~:text=Enable%20people%20to%20cheat%20on%20exams%20or%20drug%20tests

Now, from personal experience, the process for flagging these is either broken or managed by someone without a basic understanding of the terms "cheat on exams" and "can't," but I would encourage anyone and everyone to flag the Perplexity ads they see on IG and Facebook (and Google) that specifically include the bits about the browser taking exams.

If you see something, do something.

Jason Gulya

I had such high hopes for them. Their advertisements for Comet destroy any trust I had in them.

Szymon Machajewski

Opposing a new tool by banning it, especially when that choice limits students’ opportunities to learn, often feels less like sound pedagogy and more like teacher grieving. Every shift in educational technology meets resistance before acceptance; this is hardly new. Sites like NoNeedToStudy.com have student testimonials in writing and video. Teachers’ outraged reactions, somewhat ironically, helped push more attention, and their own students, toward the business.

Unrestricted LLMs are freely available on Ollama.

Instead of blocking tools like Comet, educators could redirect their focus toward ethics and responsible use. The tool can expand accessibility under ADA guidelines by reading complex webpages aloud, supporting visually impaired and neurodiverse learners, and easing comprehension for ESL students. At the same time, it presents a chance to build digital navigation, critical thinking, and data literacy, skills that increasingly define modern education.

If students are drawn in by ethically questionable marketing for AI products, perhaps the deeper issue lies in what they haven’t been taught. Their choices may point to gaps in our instruction, not just their judgment. The solution, then, isn’t restriction but education: helping students think carefully about digital citizenship, AI ethics, and what honest learning actually means.

Teach ethics. Teach technology. Support students where they are. That’s how artificial intelligence becomes a tool for deeper human understanding, rather than a threat to it.

Ekul.

They're targeting students because these companies are desperate for any way to actually make money from AI, something we've still seen no real evidence of. Like the old Big Data bubble, they've got huge valuations and big promises, without any idea how to make it profitable.

Austin Morrissey

For such a modern company, I was surprised by how trite the commercial was. I had initially thought it was an individual marketer's choice about how to present the brand, but it is much worse if this is the official PR stance.

Paul Wilkinson 🧢

A senior last year argued that any attempt to block AI, morally or technically, was a pointless waste of time, given that the majority of their classmates lack sufficient moral integrity to stop themselves from using any available AI, and that any technological barrier can be overcome. If true, that would leave teachers to find learning activities engaging enough that they can’t be completed by AI, if we want to reach the learners who most need to establish the moral discipline to do their own work. “Their own work,” however, is changing in a world that demands the use of technology to upgrade human attention, productivity, and quality. As cynical as Perplexity’s ads are, its AI marketing no doubt told the company these were the most effective ads it could produce, sadly confirming my senior’s observation of the moral landscape. Alas, we can’t teach in an imaginary world where brain rot has not crept like termites into our structural framework. We can’t change Perplexity’s marketing. We can only find ways to stay one step ahead by altering learning tasks to recognize that the depth of the challenge goes well beyond floating laments into the ether.

Madame Patolungo

This is great commentary that I appreciate and am happy to share with colleagues, but I don't get the conclusion and almost wish that you'd revise or delete it:

"We badly need to move beyond talk of AI slop, model collapse, or general criticisms that AI is bad at performing all tasks. Instead, we should start being honest when these tools work and are effective, while honing meaningful criticism of when they are not and cause negative and often lasting impacts on our culture. Selling AI to students as a means to avoid learning must be one of those, and companies that engage in it should be put on notice that this practice isn’t acceptable and has consequences."

I'm sorry, but this is unnecessarily condescending to your readers, whoever they might be. Who is this "we" in need of such a dressing-down? Of course we should be honest when the tools are effective: do you think the default position of your readers is dishonesty?

And absolutely - let's "hone meaningful critique" when gen AI "cause[s] negative and often lasting impacts on our culture." Because the latter is indeed the elephant in the room. But do you really think that "we" are so out of touch that this is breaking news?
