Discussion about this post

Marla Simpson

I want to challenge one of the underlying assumptions in this post, an assumption that has been repeated as fact without critical consideration: that the current adoption of LLM generative AI is only the beginning of a larger trend of widespread adoption. This is not a foregone conclusion. The money being lost on OpenAI is unprecedented. OpenAI lost $5 billion in 2024 and loses money even on its paid subscription services. This technology is offered to the public only through massive amounts of speculative investment. Offering a cool tool to the public in hopes that it will someday become essential will only satisfy investors for so long. They will want to see user data first and then profits.

Regarding user data, according to OpenAI's own propaganda piece, Building an AI-Ready Workforce, "More than any other use case, more than any other kind of user, college-aged young adults in the US are embracing ChatGPT…" No wonder OpenAI is doubling down on providing AI to students: adoption by students is driving its growth. By contrast, despite business leaders' excitement about AI generally, the workers actually tasked with using it have mostly found little help in a general-purpose information tool that is frequently inaccurate and lacks privacy protections. Few jobs are general; most work is particular. Even in areas like customer service, the anticipated business disruptions have not materialized. So which industry has been disrupted? A number of business indicators now suggest that AI has disrupted what I would call the cheating industry. Companies like Chegg and Quizlet appear to be losing market share as ChatGPT gains. Usage patterns revealed through Google Trends show ChatGPT interest tracking the school year, with peaks and valleys almost identical to those of Quizlet and Chegg. More than any other use, ChatGPT appears to serve as a highly effective cheating device for students.

From a business point of view, what we have are AI companies building user numbers by tempting students away from the work of studying for grades, then using these growth statistics to convince businesses that they need to keep up with the users of the future (students), then trying to convince our institutions that we need to make our students AI-ready for future jobs.

In the meantime, those students who use the tool often see their learning compromised as a result. To me, this is a situation like Big Tobacco, where the companies are trying to build business by creating unhealthy dependencies.

I agree that we need to engage with this problem rather than sit on the sidelines, but I don’t agree that the future of AI is either clear or inevitable. We should advocate to hold these companies responsible for the harm they are imposing on our education system.

Still lighting learning fires

I agree that educators can't sit this one out. In fact, hiding our heads in the sand is irresponsible when it comes to preparing students for the future they face. I am suggesting that there is a very different way to deal with the issue: adopting a "Proficiency" orientation. Here's a link to the Substack post I published earlier this week. I'd love to have your feedback on the concept. I'm not wedded to all the details, but I think the concept has merit as an alternative to the yes/no discussions that many seem to be having. https://twelchky.substack.com/p/ai-in-schools-a-call-for-a-new-kind?r=6jqjj

