OpenAI’s SearchGPT is slowly rolling out, and it poses a significant challenge to existing tools like Perplexity and Google’s AI-generated search. However, OpenAI’s latest tool feels more like a slightly more functional version of Microsoft’s Copilot than a great leap forward.
The Fast Fashion of Information
Using AI search feels akin to fast fashion: it turns information into a needlessly expensive commodity that is discarded as quickly as it was created. SearchGPT is a bit of a puzzle at this point. It isn’t actually about searching for information. Instead, SearchGPT constructs a miniature summary, a personalized Wikipedia entry written just for you. Many of these outputs move beyond bland, neutral-sounding summaries and take positions on the accuracy and validity of the claims in the articles they cite. This is both SearchGPT’s strength over traditional search and its ultimate weakness.
SearchGPT generates responses quickly, but that speed ultimately represents a wasted opportunity to build lasting knowledge. The wiki-like stub it generates about a topic of interest fails to build personal or public knowledge in two crucial ways:
It doesn't become a building block in a personalized library that showcases your interests or learns from you by curating responses that speak to those interests.
It doesn't add to public knowledge by becoming part of a searchable collective like Wikipedia.
Trusting AI to Read for You
The impact of AI-generated search on critical thinking skills, particularly for students, is completely unknown. But I have some reasons for concern. For instance, when I prompted SearchGPT with "Is China's Xi an autocrat," the model gave me a summary that mimics taking a stance on the information, using the arguments from the sources it includes.
I don’t disagree with that summary, but I want emerging readers to be able to explore the arguments made within these linked sources and form their own interpretations. Isn’t that a skill we’d like to help students build rather than simply offload?
More pointedly, what happens when a country like China programs an AI model to give a very different response to that query? We’ve become accustomed to search being synonymous with truth and accuracy and haven’t spent much time thinking about how messy the existing algorithms make our digital worlds.
AI Should Do More
Yes, there’s utility in SearchGPT, and some will certainly find it useful for their day-to-day tasks, but for a feature to replace search, it needs to do more than improve the basic experience. Maybe I’m spoiled, but I want AI to show me things that go beyond search, things that give me pause and make my jaw drop. SearchGPT isn’t that.
Something like Nomic’s Atlas is the flavor of awe-inspiring AI search I’m drawn to. Atlas lets users upload massive amounts of unstructured data and explore it using different types of machine learning. The experience is like slowly zooming in from a God’s-eye view of a galaxy, revealing patterns in data so vast that a human being could never surface them without AI assistance.
You can sign up for Nomic’s Atlas for free and take a journey through sample datasets from social media to visualize trends and behaviors that represent how we act online.
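If you’re curious what “upload and explore” looks like in practice, here’s a rough sketch using Nomic’s Python client. Take it as a minimal sketch, not a definitive recipe: it follows the client’s documented embed-then-map quickstart pattern, the random vectors are placeholders for real document embeddings, and the function names may have shifted across client versions, so check the current docs.

```python
# A minimal sketch of mapping data with Nomic's Atlas Python client
# (pip install nomic). Entry points have changed across versions, so
# treat this as illustrative rather than authoritative.
import numpy as np
from nomic import atlas

# Authenticate first with an API key from atlas.nomic.ai, e.g.:
# import nomic; nomic.login("YOUR_NOMIC_API_KEY")  # placeholder key

# Stand-in for real unstructured data: in practice you would embed
# your documents (tweets, reviews, transcripts) with an embedding model.
num_points = 10_000
embeddings = np.random.rand(num_points, 256)

# Upload the vectors; Atlas indexes them and returns a hosted, zoomable
# 2D map where nearby points are semantically similar.
project = atlas.map_embeddings(embeddings=embeddings)
print(project)  # prints a link to the interactive map
```

The payoff is a shareable map you can pan and zoom in the browser, which is where that slowly-zooming-into-a-galaxy feeling comes from.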
Generative AI should be weird and explore use cases we simply can’t address with existing tools. Otherwise, what’s the point? Atlas uses AI to search in ways that feel profound, not simply like an upgrade to traditional search.
SearchGPT will likely kill off many of the dozens of bespoke AI wrapper apps, but it won’t replace Google search, and I doubt Wikipedia has much to fear. Google’s own foray into generative search has been met with scathing criticism. It’s inaccurate and often produces an aggregate summary that turns searching for simple answers into a murky exercise, when all a user wants is clarity.
Generative search complicates our relationship with digital information in ways we may not fully understand. All the things we associate with existing search will change, some subtly, others profoundly. Presenting digital information has never been a neutral act, and when AI presents information alongside synthesized positions on complex and controversial topics, users are left trusting a machine’s prediction about what it was programmed to determine as correct.
Does Speed Matter More Than Comprehension?
Everything is moving so fast in AI land that asking for things to slow down might be a lost cause. If generative search becomes the norm, we’ll all get information faster, but that doesn’t necessarily mean we’ll read it more deeply or understand it any better. When AI shapes knowledge by taking positions on topics, we cede just a little more of our thinking to something we do not fully understand. That should give us all pause.
Some Recent Interviews and Podcasts
We’re halfway through August, and I cannot believe I’m only days away from the start of the fall semester. I had the wonderful opportunity to write for the Chronicle of Higher Education about the need to normalize AI usage through open disclosure. I also sat down with John and Rebecca from the Tea for Teaching podcast to talk about the Beyond ChatGPT series.
Chronicle of Higher Education: Why We Should Normalize Open Disclosure of AI Use
Tea for Teaching Podcast: Beyond ChatGPT
In the spirit of full disclosure, I offer the following label to make my AI usage clear in this post:
AI Generated Statement: I used Grammarly’s AI tools to improve the grammar and sentence mechanics of this post. This involved using Grammarly’s tools to check for errors and consider suggestions for syntax and style. I also used Anthropic’s Claude 3.5 to help generate title ideas. All other ideas within the post are my own.