Agency and AI have become a focal point in education around the current batch of generative tools, but the focus on personal use and autonomy (e.g., to generate or not to generate) profoundly misunderstands the frankly shocking impact autonomous agents could have on learning.
Very powerful piece. Well written—the pace and vivid scenarios are riveting. Man, I hope it doesn’t roll that way, but everything you say seems plausible to me. Wow.
What's so interesting about the scenario Marc describes is that it sounds like a sci-fi story, yet we already have the technology to accomplish it, provided there were the political will to implement it (a big if). Courts have been using AI to predict recidivism for a while - well before the breakout of ChatGPT - with mixed results and major examples of bias.
(https://www.lawyersalliance.com.au/opinion/predicting-recidivism-the-questionable-role-of-algorithms-in-the-criminal-justice-system)
Whether those issues can be fixed is anyone's guess, but applying Generative, Predictive, and Strategic AI to students is a fascinating example of how utopian tech entrepreneurs imagine the future. Those of us in education, while recognizing the potential of such a system, could immediately tear apart the issues involved with tracking students in this fashion.

But what jumped out at me most in this post is Marc's reminder of how much personal information we already give up to algorithms through our phones, online shopping, internet browsing, and every other way we live digitally. Most of us know it but never really think about it. All those terms-of-service contracts basically mean these companies can use our information in ways most of us would never imagine.

All of this underscores what to me is one of the major themes of Marc's work: the gap between how educators (and really, the average citizen) understand the implications of generative AI and what is just around the corner is a huge problem in almost every respect.
Thank you. I wasn’t aware of AI being used to predict recidivism. I hope you’re right that these proclivities will be nipped in the bud legally.