We keep breaking new ground in AI capabilities, and there seems to be little interest in asking whether we should build the next model to be more life-like. You can now go to Hume.AI and have a conversation with an Empathic Voice Interface. EVI is groundbreaking and extremely unnerving, but it is no more capable of genuine empathy than your toaster oven. EVI is simply an AI tuned to perform whatever emotions its predictions suggest a user wants from an interaction. It can play the role of tutor, therapist, romantic partner, or even a priest, but it will always be just that—a performance. What do we risk when we allow synthetic experiences to stand in for real emotional connection? Worse yet, how many people will accept this performance and believe it is the real thing?
When I first ran across hume.ai last year, I described it in a blog post as the generative AI equivalent of a bi-stable two-dimensional form, like the Rubin Vase or the duck/rabbit image made famous by Wittgenstein. It simultaneously appears to be the most impactful example of where foundation models may take us, and the most terrifying.
To answer your concluding question: one important line to draw is about the data it collects about users. Problems of privacy and changing norms about what we allow companies to collect and share have emerged as the thorniest of the internet era. In this moment of warranted panic about adolescent mental health, what will happen to the data your "study buddy" collects about your interactions? When it is not just your clicks and purchases being commoditized (you are the product) but data that supposedly reveals your emotional state and affective reactions that gets packaged and sold to...school districts? parents? hume.ai's business partners? beverage companies?
I haven’t used this AI, so I’m not entirely familiar with how it interacts with users. One concern I can see is that if someone uses it for friendship, emotional support, or counseling, will it always be supportive of their actions and feelings? Will it consistently take their side? I can envision this creating a false reality for the person when they interact with real people. Might they believe that others should always be on their side, supporting all their actions, and never challenging or disagreeing with their way of thinking?
It's just disasters all around us, constantly.