When I first ran across hume.ai last year, I described it in a blog post as the generative AI equivalent of a bi-stable two-dimensional form, like the Rubin Vase or the duck/rabbit image made famous by Wittgenstein. It simultaneously appears to be the most impactful example of where foundation models may take us, and the most terrifying.
To answer your concluding question: one important line to draw is about the data it collects about users. Problems of privacy and changing norms about what we allow companies to collect and share have emerged as the thorniest of the internet era. In this moment of warranted panic about adolescent mental health, what will happen to the data your "study buddy" collects about your interactions? When it is not just your clicks and purchases being commoditized (you are the product) but data that supposedly reveals your emotional state and affective reactions that gets packaged and sold to...school districts? parents? hume.ai's business partners? beverage companies?
I haven't used this AI, so I'm not entirely familiar with how it interacts with users. One concern I can see is that if someone uses it for friendship, emotional support, or counseling, will it always be supportive of their actions and feelings? Will it consistently take their side? I can envision this creating a false reality for the person when they interact with real people. Might they believe that others should always be on their side, supporting all their actions, and never challenging or disagreeing with their way of thinking?
It's just disasters all around us, constantly.
This is not so bad, to be honest. I think you have to adapt to this kind of technology, or you are in for a dizzying and unpleasant ride.
I doubt that adapting to evil in any way beyond "rejection" will take us anywhere.
Technology is not evil. AI is not evil. Humans are evil, and we must work every day to rid ourselves of evil tendencies and misguided ethical biases. Adapting to the new normal will be good for you and your mind. Otherwise, if you want to live a happy life, you will have to move to the country and live in isolation, surrounded by few people and no technology. These applications, along with robust robotics, LLMs, and other types of neural network architectures and multi-agent systems, will evolve into new types of intelligent beings that we will socialize and interact with on a daily basis.
Most Americans agree that we do not want this, either.
https://www.vox.com/future-perfect/2023/9/19/23879648/americans-artificial-general-intelligence-ai-policy-poll
AI is the automation of life itself, the anti-life equation. Ask if killing life is evil and figure out the rest.
As for you, the fact that you prioritize machines over humans says all that is needed about your understanding of evil.
I do not prioritize machines over people. I prioritize people over machines, of course. Hiding, complaining, and crying under the bed is not going to solve anything.