This post is the fifth in the Beyond ChatGPT series about generative AI’s impact on learning. In the previous posts, I discussed how generative AI has moved beyond text generation and is starting to affect critical skills like reading and note-taking, and to reshape relationships by automating tutoring and feedback. This post examines AI’s impact on research. The goal of this series is to explore AI beyond ChatGPT and consider how this emerging technology transforms not simply writing, but many of the core skills we associate with learning. Educators must shift our discourse away from ChatGPT’s disruption of assessments and begin to grapple with what generative AI means for teaching and learning.
Beyond ChatGPT Series
Note Taking: AI’s Promise to Pay Attention for You
When A Tool Reads Sources For You
AI’s impact on research touches so many of the skills previously discussed in this series: reading, writing, and critical note-taking, and it neatly automates them all into an often frictionless experience. The time-saving, labor-reducing automation of research is pitched as democratizing access to more information than any human being could begin to comprehend. And to be sure, using AI for breakthrough research like AlphaFold’s protein mapping promises to give science a greater chance at treating chronic illnesses.
For all of these wonderful and impactful breakthroughs, most users interact with AI research as a stand-in for search in digital domains. Tools like Perplexity, Elicit, and Consensus use Retrieval-Augmented Generation (RAG) to pull information from dozens if not hundreds of sources and synthesize it into tidy, summarized chunks for a user to explore. Used intentionally, AI research can help students navigate the increasing array of information online. But as with so many skills, there’s how AI “could be used” versus how it is actively being marketed to students.
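If you’re curious what Retrieval-Augmented Generation actually does under the hood, here is a minimal sketch of the pattern in Python: score and retrieve the most relevant sources for a query, then hand them to a language model to synthesize an answer. The tiny corpus, the word-overlap scoring, and the prompt template are illustrative stand-ins of my own, not any vendor’s actual pipeline; real tools use embeddings, vector search, and far larger indexes.

```python
# Minimal sketch of the RAG pattern: retrieve relevant passages,
# then stuff them into a prompt for a language model to synthesize.
# All sources and scoring here are toy examples, not a real index.

from collections import Counter

# A toy "source library" standing in for a real document index.
SOURCES = {
    "doc1": "Generative AI can summarize long academic articles quickly.",
    "doc2": "Close reading builds the context needed to evaluate arguments.",
    "doc3": "Retrieval systems rank documents by similarity to a query.",
}

def score(query: str, text: str) -> int:
    """Crude relevance score: count of words shared with the query.
    Real tools use embedding similarity instead of word overlap."""
    q_words = Counter(query.lower().split())
    return sum(q_words[w] for w in set(text.lower().split()) if w in q_words)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k sources most relevant to the query."""
    ranked = sorted(SOURCES.values(), key=lambda t: score(query, t), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Pack the retrieved passages into a prompt; a language model call
    would then turn this into the tidy summary the user actually sees."""
    context = "\n".join(f"- {passage}" for passage in retrieve(query))
    return f"Using only these sources:\n{context}\n\nAnswer the question: {query}"

print(build_prompt("How do retrieval systems rank documents?"))
```

The shape of the workflow is the point: the “research” a student sees is a synthesis over whatever the retrieval step happened to surface, which is exactly why the opacity of these systems matters so much.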
This matters more than ever before because OpenAI is releasing customized GPTs to free users, including bespoke research plugins like Consensus and Scholar.AI, along with dozens of other research-specific tools. Students will simply select their plugin of choice right from ChatGPT’s interface, removing yet another barrier between a user and a specific AI tool. Complicating matters further, OpenAI is also launching a specific education tier aimed at colleges and universities, promising students greater bandwidth and access to its AI during peak times.
How Students Are Being Sold AI Research
The sales pitch from countless influencers of letting AI research for you joins the chorus of social media marketing that frames AI as a shortcut in the learning process. When we offload the task of inquiry, we put our trust in opaque systems that cannot be audited to explain how they arrived at a response. This should cause all of us to pause and consider the downstream impact automating research has on critical thinking and human reasoning.
How Can We Research Without Reading?
Beth McMurtrie’s recent essay in the Chronicle, “Is This the End of Reading?”, goes into astonishing depth about how technology and cultural attitudes outside of generative AI may have already impacted students’ reading and attention spans. As McMurtrie observes about student engagement with reading: “A lack of faith in their academic abilities leads some students to freeze and avoid doing the work altogether. And a significant number of those who do the work seem unable to analyze complex or lengthy texts. Their limited experience with reading also means they don’t have the context to understand certain arguments or points of view.”
Dependence on technology may be robbing students of their ability to read closely and absorb information—two crucial skills needed for researching ideas and arguments that contain more complexity than a simple tweet can convey. Quoting Theresa MacPhail, McMurtrie notes:
“Most students still weren’t doing the reading. And when they were, more and more struggled to understand it. Some simply gave up. Their distraction levels went “through the roof,” MacPhail said. They had trouble following her instructions. And sometimes, students said her expectations — such as writing a final research paper with at least 25 sources — were unreasonable.”
I’ve taught first-year writing for over a decade and I, too, can attest to the changes in students’ ability to process complex information. I’ve blamed the complex language authors used and tried to find sources free of academic jargon; let the number of sources slide for research essays because of how difficult it was for some students to navigate academic databases; and even blamed how challenging it was for students to read on screens or how expensive it was for them to print out copies of articles and bring them to class.
I focused on removing as much friction as possible from their research experience, completely oblivious to how doing so may have robbed them of valuable time on task to explore skills we simply assume students arrive at university with. Now, generative AI may accelerate that process by removing certain elements of critical thinking, synthesis, and even reasoning from the process.
Navigating the AI Research Landscape
Students and educators have been using digital search for over three decades. Institutional libraries have hefty budgets for academic database subscriptions that run into the millions. Physical card catalogs have long since been replaced with upgraded technology. Many librarians work as advanced knowledge brokers, helping faculty and students explore data and its increasing impact on all of our lives. AI is just another tool, and we have an imperative to learn how to use it for the skills associated with academic research.
Where I grow concerned, though, is when a technology like generative AI does more than locate sources for a user and starts creeping into the human skills we associate with synthesis. Maybe the most important skill a researcher has is synthesizing information from one author to the next, putting those authors into conversation with the researcher’s own ideas about a topic. If we offload that process entirely to AI, a user may gain time, but at the extraordinary cost of their human skills of reasoning and sense-making.
Customized GPTs Will Change How Students Research
OpenAI’s recent announcement giving free users access to the custom GPTs that companies and other users have created means students will have increased access to some incredibly powerful research tools, all through the ChatGPT interface. Perhaps the most powerful is Consensus.ai’s custom GPT plugin. Now all a user needs to do is select Consensus within ChatGPT and ask for sources on a topic, and the model will integrate those sources into a coherent response.
Of course, there are some dead giveaways that the result was AI-generated. The use of transitions at the opening of each paragraph is one. So is the lack of any errors or voice, but those are increasingly easy for users to engineer with some simple tricks that are, unsurprisingly, all over social media.
Research Requires Friction
AI research assistants are becoming increasingly sophisticated and efficient at automating parts of the research process, and giving users free access to customized versions of research tools will only accelerate student adoption. Once again, we’re faced with the question of what we value in education: what degree of productive struggle do we believe is crucial for developing critical thinking?
Generative AI offers users a seamless research experience with integrated source retrieval and synthesis, and this may improve student engagement by lowering barriers to entry. However, the human acts of manually sifting through physical and digital sources, taking notes, sense-making, identifying key ideas and gaps, and then constructing original arguments from the various pieces may be lost. And once they are gone, I highly doubt we will see their return. Too much automation in learning risks atrophying these uniquely human skills central to higher-order thinking.
It is increasingly clear to me that ethical AI use in research, for students and faculty alike, will require transparency about how the technology was used, along with a level of restraint. When we allow a tool to offload cognitive labor, we also undermine the intent behind assigning that work in the first place. The grand challenge we face may be navigating this journey with students. I’d love to believe there are aspects of the research process too rich for full automation, but that’s likely not going to be the case with freely available Large Multimodal Models.
Thanks for this series. As a college English teacher, I have been watching the steady increase of students using AI in my own courses and have been trying to navigate these changes. I'm wondering, though, how much agency we as instructors really have in controlling our students' use of these tools. Soon AI will be embedded in programs like Word, giving it a legitimacy that won't require any thoughtfulness about its use (remember how people worried that the spellcheck tool would hurt students' spelling, and now no one would consider not using it?). Many students already don't value the process of learning, so they will choose AI regardless of whatever conversations we have about the ethics of it. And many more will choose AI because it saves time that they could use for other projects that interest them more.
Because of this, I've been rethinking my entire approach to teaching, and now I want to include more experiential learning in my courses. I'm not sure what this will look like yet, but I do feel like the traditional research essay can no longer be the measure of student comprehension. If AI can do it for us, we need to be asking what other ways are there to teach and measure critical thinking? I don't have the answers yet, but I appreciate your posts as they are helping me think through what's at stake in my classroom going forward.
I'm concerned that poorly designed research AI will 'help' students solidify their biases, rather than telling them their thesis does not represent the scholarly consensus. One of the services I provide as a librarian is gently suggesting that it may be easier to complete their research if they use the sources that exist; will AI do that, or will it locate low-quality sources to match their initial assumptions?
Based on McMurtrie's article and my own observations I'm not at all convinced students will end up with AI-researched papers that they understand any better than the sources they skipped reading. If it could - if it helped them develop the necessary vocabulary and background, and quizzed them on the sources and the connections it made between them, such that they could present the paper and correctly answer questions about it - well, that would be great.