Talkie is a text-to-speech interface that really captures the Wild West moment we’re in with multimodal AI. You can chat with a bland conversational AI about schoolwork, or click on an AI version of P. Diddy and ask him how prison is going. Want a dose of Trump talking MAGA or Harris speaking about the election? Talkie has them as well, along with a massive number of porn bots programmed to serve your sexual fantasies. And you can interact with many of these synthetic voices for free. You don’t even have to sign in. Welcome to AI without constraints.
Conversational AI has valid use cases in education. People across the Global South are using this technology as we speak to build language proficiency. Conversing with a bot is no replacement for human interaction, but that concern pales in comparison to the needs of learners who lack resources in the first place. However, this idealistic view of the technology overlooks a crucial problem: the inherent risks of deploying AI systems that can convincingly mimic human conversation and generate any content, without fully considering the broader societal implications.
Talkie’s overall interface is simple enough to use, but the bots aren’t inspiring or technically impressive. My guess is that the voice model they’re using is fairly basic compared to the frontier foundation models, and that they’re relying on the public to program personas. The app also limits you to snippets of dialogue, freemium at its finest: you’re allowed a certain number of interactions before the system asks you to sign up.
Generative AI can mimic basically anything, or in this case, anyone. We were naive to think the generic, robotic text spit out by ChatGPT would be the limit of the technology. The thing is, Talkie isn’t even the worst site out there. There are hardcore porn sites, and others that let you swap faces using AI to create near-instant deepfakes of anyone. These tools are being used much as you’d suspect: to harass young women in truly horrific ways.
What makes Talkie unique is its lack of sign-on. Simply click a link, enter your text, and you’re chatting away with the digital replicant of a teacher, an interview coach, and, yes, a porn star. Such is the world we live in right now.
The Danger of Unregulated Multimodal AI
A young man ended his life a few weeks ago after becoming enamored with a chatbot on Character.AI programmed with the persona of Daenerys Targaryen from Game of Thrones. In many ways, Talkie is simply a more risqué version of that site, deployed with fewer safeguards, all designed to gather clicks and users. It likely won’t exist a year or two from now. The vast majority of these small AI apps will disappear, but there’s no way to gauge how much damage they’ll do before going dark.
In an age of conservative book bans at public libraries and state-mandated age verification on adult websites, it is surprising how easy it is for these sites to slip under the public radar. Society definitely has a blind spot with multimodal voice models. I’m not sure how many more tragedies it will take before people realize that this technology becomes whatever you want it to be, warts and all.
What we’re seeing in the development space is a sadly typical pattern of reckless deployments that ask users to come up with their own responsible use cases, rather than demanding that of the creators who build and monetize these platforms. The public shouldn’t be the ones left figuring out how best to use this technology safely. What’s worse, many in the development space dismiss these issues by filing them under the umbrella of near-term harm.
The e/acc movement (shorthand for effective accelerationism) believes AI will be the best thing ever for humanity. They’re opposed by AI “Doomers,” who think AI will eventually become so powerful it will threaten human existence. Both of these philosophies dominate the current AI development space, leaving little room for anyone to raise concerns about what unregulated AI is doing to an entire generation of users right now. Many of them are students, and guidance often isn’t timely.
Guidance for Writing With ChatGPT Arrives Two Years Too Late
OpenAI recently released A Student’s Guide to Writing with ChatGPT nearly two years after the launch of ChatGPT. Two years of chaos. Two years of faulty AI detection. Two years of students being falsely accused. Two years of faculty facing the existential question of what it means to teach. Two years of some faculty giving up and quitting.
OpenAI was well aware from the beginning that students made up the majority of their user base. They also understood that many students were primarily using their tool to avoid genuine learning experiences. Despite this knowledge, for two years they avoided confronting a serious issue—they had released a tool that any student over 13 could freely access and use, causing widespread disruption in education systems. While OpenAI emphasized how AI would benefit underserved and poorly resourced communities, they paid little attention to how their product was eroding trust throughout educational institutions, both in the United States and internationally.
A Student’s Guide to Writing with ChatGPT is a starting point that arrived too late to effect meaningful change in how the majority of students use ChatGPT. That ship has long since sailed. What’s more, the calls for transparency at the end of the document anthropomorphize AI by equating the tool with thinking:
Be transparent—cite your conversations.
One last point: When you use ChatGPT to deepen your understanding, develop your ideas, or come to insights you might not otherwise have had, it should fall within the bounds of acceptable academic practices. But since ChatGPT can also be used in unethical ways, your professors will likely feel more comfortable if they can see exactly how it’s contributing to your thinking.
Part of academic work is being transparent about your sources. That’s why universities emphasize the importance of proper citations, making sure you acknowledge the thinkers who’ve shaped your understanding.
Similarly, it’s important to be open about how you use ChatGPT. The simplest way to do this is to generate shareable links and include them in your bibliography. By proactively giving your professors a way to audit your use of AI, you signal your commitment to academic integrity and demonstrate that you’re using it not as a shortcut to avoid doing the work, but as a tool to support your learning.
While I wholeheartedly agree that open disclosure matters when using AI, there is a danger in treating ChatGPT as a “thinker.” AI cannot think, nor can it reason in any way akin to a human being, and we should recognize such framing as marketing and little more.
With hundreds of millions of dollars being poured into threat-matrix calculations about AI and existential risk, and countless billions more being dumped into AI development, where is the money to address the near-term harms of unregulated AI on vulnerable users? Obviously, there are differences between interfaces like Talkie and ChatGPT, but neither takes into account how young people actually use their products, and both fail to acknowledge any substantive criticism regarding the immense cultural impact this technology is having on society.
This document does two things I find problematic.
ChatGPT's big value is helping workers remove repetition and cognitive labor from ongoing tasks: work that requires their higher-order thinking skills but might be pattern-based, and thus supplemented by an inference machine. I use it a lot for that, and shaving chunks of time off tasks is a game changer.
Education is collaborative, slow, and cognitively demanding. It requires reflection and benefits greatly from interpersonal relationships, and memory gains weight from emotionally valued experiences, which are best shared with others.
Unfortunately, this guide frames the student work experience as something to be sped up and outsourced. "Delegate citation grunt work to ChatGPT" is the very first sentence. The tone is immediately one of "your learning tasks are laborious and beneath you." You are being denied the fun stuff in learning because your instructors are meanies.
Also, NONE of this encourages students to interact with other people. We know learning is a social activity, done with others, yet this student guide suggests replacing peers and instructors. Other people are totally, and intentionally, removed from each of these steps. That runs counter to millennia of human learning patterns and to educational research.
Instead of releasing a guide, OpenAI could easily have shipped a new model selector (GPT-4o for students), which would literally just be the existing model with a student-specific system prompt. This doc feels more like PR than any substantive move to improve student usage of LLMs. OpenAI completely controls the interface; they don't need to rely on casual suggestions on a totally separate web page that students will likely never see.
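To make the point concrete, here is a minimal sketch of what such a selector would amount to under the hood, using OpenAI’s standard Chat Completions API. The prompt text and the student_chat helper are my own hypothetical illustration of the idea, not anything OpenAI has published or shipped:

```python
# Minimal sketch: a "student mode" is just the existing model plus a
# student-specific system prompt. Nothing else about the model changes.
# The prompt wording below is hypothetical, written only for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STUDENT_SYSTEM_PROMPT = (
    "You are assisting a student. Support their learning rather than "
    "replacing it: ask clarifying questions, explain reasoning step by "
    "step, and decline to write graded assignments outright. Encourage "
    "the student to discuss ideas with peers and instructors, and remind "
    "them to disclose and cite AI assistance per their school's policy."
)

def student_chat(user_message: str) -> str:
    """Send one message through the hypothetical student mode."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": STUDENT_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(student_chat("Can you write my essay on the Louisiana Purchase?"))
```

That is the entire engineering lift: one system prompt behind a menu entry in an interface OpenAI already controls, which is why a separate web page of suggestions reads like a substitute for an actual product decision.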