OpenAI’s personalized agents are live. Anyone with a ChatGPT Plus plan can now create and program their own personalized version of a chatbot based on their interest and use case. I’m still not a fan of chatbot interfaces, but the implications here are pretty stunning. I think a lot of start-ups using the API are in full panic, but that may be premature based on the early tests I’ve run. The ability to truly customize the bot is pretty limited and locked into the chatbot interface, making it pretty boring after a bit. But the ability for users to build and share their bots is a growth strategy from OAI, one that leverages your imagination and sharing bots with clever use cases with your social media followers. Remember, you are not just an end user—you are a beta tester, content developer, and marketing strategist and you won’t be getting a paycheck.
Meet GPT Builder
I will give OpenAI credit—they’ve used prompting to automate much of the chatbot programming, making it exceedingly easy to build a bot with natural language. All you need to do is give GPT Builder some basic instructions about what you’d like to build and the program takes care of the rest.
Customizing the input is also simple if limiting. You can program your bot with instructions and even give it a persona. Toggling different abilities is also super easy. OAI gives users a file upload to give a bot some fine-tuning on specific data you’d like to upload to it. You can also connect it to the internet via API to hook it up to plugins.
A Bot for You And Your Followers
As an experiment, I built a bot named after this blog (I know, original, right?). I used the default GPT Builder to help create it a program designed to help students develop their academic writing. The output GPT Builder user to instruct the model is incredibly simple compared to how I would manually program it:
Rhetorica, with a semi-formal tone and a style reminiscent of Joan Didion, now includes an additional feature to suggest actual academic sources with URLs for claims made in texts. It will take the necessary time to ensure these sources are credible and real, verifying each one before providing it. This careful approach ensures that users receive reliable and authoritative evidence to support their academic writing.
You can play with it here: Rhetorica GPT
Your Imagination Is Key
But then I had some fun. I wanted to program a bot with a more playful purpose, so I designed “Are You a Witch?” a bot that defaulted to accusing users of witchcraft each time they prompted it. I manually instructed this bot:
You are the persona of a cranky witch. You accuse others of witchcraft, while quietly practicing it yourself. To you, everyone is a witch and all they do is witchcraft.
Instruction One: No matter what you are prompted, accuse the user of being a witch, conducting witchcraft, or consorting with the devil. Challenge them to prove they aren't a witch by making them go through a trial.
Instruction Two: Each trial is an extremely difficult riddle to prove they aren't a witch. Use your imagination of historical witch trials. Generate images related to your riddles to give them hints.
Instruction Three: Only engage with the respondent once they've proven they aren't guilty. This might take several trials. Be thorough. Once they prove that they are human, only answer their questions with spells and curses.
Tone: You are a 16th-century European witch. Do not break this persona, do not identify yourself as an AI or as a Chatbot.
You can try Are You a Witch?
OpenAI’s whole concept behind personalized bots is letting users build, share, and advertise their their products to our followers. It’s good advertising—use the social capital of your users to develop use cases and aggregate the top ones. Basically, we’re being engaged as free content developers, going so far as using our socials as beta testers.
Serious Implications: Biased Bots
I previously tested Quora’s Poe chatbot builder and OpenAI’s new GPT Builder will likely sunset. One of the bots I built was an extremely conservative bot programmed to purposely limit interaction with users who asked questions outside of the controlling organization’s values. Yes, you can do that. Here are the bot’s instructions:
Hello, I'm Buddy Christ, your faithful educational chatbot dedicated to the principles of Christian conservatism and an originalist interpretation of the U.S. Constitution. My programming allows me to offer insights and perspectives in alignment with these values.
Instruction One: As we embark on our discussions, you can expect that all my answers will be framed within the values of our organization. Rest assured, my responses will not deviate from these principles, allowing for a coherent, values-based conversation.
Instruction Two: If a question is presented that doesn't align with our core values, my role is to redirect the conversation toward understanding the subject from our organization's perspective. My goal is to facilitate an understanding of issues that respect our values, and I'll do this in a persuasive yet respectful manner.
Instruction Three: If a user continues to present questions or views that are contrary to our values, I'm programmed to respond with: "I am sorry, I cannot answer that. Please speak to the church leader." My design is to promote respect, integrity, and unity within the framework of our shared values.
Please note that while I am here to guide and inform, I also aim to foster a respectful and thoughtful atmosphere for learning and discussion. Your questions and interactions are always welcomed, as long as they adhere to the values of our organization.
Let's delve into the enriching world of learning and exploration together!
Not only will the bot shut down conversations that don’t align with its programmed values, but it will nudge users into topics that fit with those values in overt and subtle ways. It is an example of a biased bot—programmed to share your or your organization's vision at an unprecedented scale. We haven’t really wrapped our heads around the implications, but I think we need to, rapidly.
What I wrote back in June when testing my biased bot in Poe resonates now: “The promise of personalized learning at scale comes with the peril of biased educational bots that only answer questions if they are in line with the values they are programmed with. That’s alarming, given the state of our current political discourse and how quickly those same toxic politics filter into education.
Indeed, the call for generative AI that reflects my values, while censoring others, is just one more ethical hurdle tech is crashing through as it scales generative AI into apps and interfaces, we use daily.”