Generative tools are increasingly shaping how we access and view information in our digital spaces. As Alison Gopnik argues, these models are “cultural technologies,” like writing, print, pictures, libraries, internet search engines, and Wikipedia. Cultural technologies increase our access to information, but they also transform how we relate to technology in our daily lives. Put simply, generative AI is changing how we think and, increasingly, how we feel about our world. Newer multimodal models give us access to kinds of data about the world and ourselves that we’ve never had before, at a scale only possible because of machine learning.
This isn’t necessarily a desirable outcome, as John Warner writes in “‘Grief Tech’ Wants to Eradicate the Pain of Loss.” We’re increasingly seeing use cases for this technology beyond assistance in task completion, with little regard for what we might lose in the process. John’s thoughts on grief tech are worth noting:
I am worried about the apparent desire of some, perhaps many humans to avoid the experiences of being human in an effort to achieve a kind of perfection of existence. I’m thinking about the bio-hackers like Bryan Johnson who thinks he can cheat death by turning his life into an algorithm. Johnson believes he can make death “optional.” I’m thinking about the people who believe a generative AI bot can substitute for your therapist, your romantic partner, or your teacher.
Believing in these things is to willingly give oneself over to a delusion. I worry because humans have long demonstrated a penchant for delusion and this technology offers up these delusions “at scale,” as a tech person might put it.
Instead of viewing generative AI as a calculator or some similar tool, we should be shifting the discourse to how this technology is shaping our culture. The very notion of using a machine to solve the human condition of grief and pain should warrant some deeper conversation in academia and broader society.
Speaking Robots
Generative models now easily allow users to create multimodal experiences based on their data. Many in education have been talking about NotebookLM’s audio overview feature that lets a user generate a podcast based on nearly anything they upload.
Hearing two synthetic voices talk with interest about the source you upload changes the experience of receiving that information. One of my Digital Media Studies students noted as much in a recent reflection after uploading her personal notes into NotebookLM and generating a podcast. She folded her laundry and cooked dinner while it played in the background and reported it helped her study more deeply than simply reviewing the notes on her own. That update changed her habits. Instead of sitting down and poring over her notes, she let the AI talk about them with interest while she did her chores around the house. She saved time, yes, but she also changed her experience.
Listening to your own data is one thing; having OpenAI’s newly launched voice feature respond with a synthetic voice that changes its tone and style based on the emotions it detects in yours is something else. Google’s Gemini Live and Meta’s AI voice tools all do something similar. For me, it all comes down to how artificial voices will be used not simply to provide information, but to tap into your emotions and persuade you.
I think we can talk about the hype of AI being “superhuman” at tasks and easily debunk it for many use cases, but launching an AI app at that scale, programmed to scan and respond to a person’s emotions, makes a shockingly convincing case for AI’s ability to conduct superhuman persuasion across our culture.
AI’s Impact on Other Cultural Technologies
Google’s AI Overview feature isn’t exactly popular among users, but good or bad, this new method has fundamentally altered a previous cultural technology—search.
How deeply generative AI has disrupted traditional research is hard to say. Generative tools pop up all over the place, making it hard to keep track of the seemingly endless waves of new models that can impact research, or of the third-party tools that build wrappers around a big AI firm’s API. So instead of listing a dizzying array of tools, let’s take a moment to talk about how AI is being used as a technique to augment, or in some cases replace, search. Keep in mind, ChatGPT was only publicly released 22 months ago!
1). Where We Started: Rely on the AI Model to Generate Information
When ChatGPT was first released back in November of 2022, many people were shocked that it would generate sources for you when asked, and dismayed to find how often these were entirely fabricated. Relying on a foundation model to produce information without RAG to ground its results proved disappointing. A few weeks earlier, also in November of 2022, Meta released an AI model called Galactica that was designed to help academics write research reports. Meta ended up pulling the model in less than three days because it had few safeguards and would hallucinate. My favorite example is a researcher who used Galactica to generate a report on the benefits of eating ground glass for digestion!
While this method has mostly fallen out of favor, there are still a number of models on the market, like Anthropic’s Claude, that don’t use RAG when coming up with sources. One interesting project, housed in Google Labs, is called Learn About. What makes Learn About so interesting is that Google trained a series of LLMs specifically for educational settings, which means Learn About generates its responses solely from its training data.
2). Where We Are Now: Ask the Model to Find Real Sources for You
OpenAI’s ChatGPT, Microsoft’s Copilot, and Google’s Gemini, along with dozens of other tools like Perplexity, Consensus, and Elicit, use generative AI to help users search various databases. For foundation models like ChatGPT and Gemini, this means being connected directly to the web, running multiple searches, and compiling the results into generated summaries. Most of these include links back to the original sources, but they don’t go as far as highlighting the sentences the information came from. Elicit uses an API to search academic databases, relying on RAG to compile summaries and even synthesize information.
This method isn’t as popular with users because it often screws up results. The public’s reaction to Google’s AI Overview appearing with each search is a pretty solid indication of how inaccurate the results can be. From an educational standpoint, you can still ask students to track down the sources and teach lateral reading strategies, so it is possible to use these features as a technique to help students explore credibility.
3). Where We are Headed: Bring Your Own Data
Google’s NotebookLM and Nomic’s Atlas only work if a user brings their own data to their interfaces. The tools then use a process called retrieval augmented generation (RAG) to create vector embeddings and maps of the material you upload, retrieving the most relevant pieces when you ask a question. In non-techno jargon: the AI summarizes the source and lets you use the model to ‘talk to your data.’ Since the search you conduct is grounded in your own research, there’s less risk of running afoul of hallucinations. Note, less does not equal none! RAG also isn’t implemented equally across foundation models; some do it better than others.
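For readers who want to see what that retrieve-then-generate pattern looks like under the hood, here is a minimal sketch. It is illustrative only: the “embedding” is a toy bag-of-words counter rather than a real neural embedding, and generate_answer is a hypothetical stand-in for whatever model a tool like NotebookLM actually calls.

```python
# Minimal sketch of the "bring your own data" / RAG pattern.
# Toy embedding and a placeholder generation step; not any vendor's real API.

import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks of the user's own sources most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def generate_answer(query: str, context: list[str]) -> str:
    """Stand-in for the generation step: a real system would send this prompt
    to a language model so the answer stays grounded in the retrieved context."""
    prompt = (
        "Answer using only these sources:\n"
        + "\n".join(context)
        + f"\n\nQuestion: {query}"
    )
    return prompt  # placeholder: shows what the model would actually see

# Example: the 'data you bring' is just a few note chunks.
notes = [
    "Lecture 3: cultural technologies include writing, print, and libraries.",
    "Lecture 4: retrieval augmented generation grounds answers in uploaded sources.",
    "Lecture 5: hallucination risk drops, but does not disappear, with RAG.",
]
question = "How does RAG reduce hallucinations?"
print(generate_answer(question, retrieve(question, notes)))
```

The design point to notice is that the model only ever sees the chunks retrieved from your own sources, which is why this method lowers, but does not eliminate, the risk of hallucination.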
In terms of using AI as a search aid, this method is probably the most thoughtful. It offers users the opportunity to silo their data (as much as that’s possible) and to trace back how the AI system pulled its information.
AI is making search more complicated, and these are just three of the main techniques by which generative tools are used in research. We’ll likely see more come and go. The challenge for education will be to navigate these techniques and help students understand what affordances using AI offers them, but also what it means to talk about AI as a cultural technology.
Understanding the Landscape Isn’t Advocating Adoption
While I’m sure AI can help users with certain tasks, I’m not so sure we’ve had enough time to come to terms with the challenges generative AI, as a cultural technology, poses for tasks like reading and research. We don’t have a great track record in this regard. Many of us grew up when Wikipedia arrived and faced reactionary responses from certain segments of education that had a lasting impact on how Wikipedia was viewed as a source.
To say our relationship with AI is complicated would be a generous understatement! We don’t need these endless waves of updates and third-party tools that are sold to students as time savers. Even so, if you get past the hype and the base marketing, I think you’ll discover some solid areas where using AI as a research assistant can help students explore and process information. But is that really a good thing?
If I had my wish, we’d get a long-needed pause in AI deployments so that we could start to reframe this discussion around AI as a cultural technology and not simply as a one-off tool students use to cheat. As educators, our challenge is to give students guidance and support while also establishing frameworks that preserve the skills in close reading and critical thinking that we fear uncritical adoption of these tools will rob them of.