9 Comments
Mar 29 · edited Mar 29 · Liked by Marc Watkins

When I first ran across hume.ai last year, I described it in a blog post as the generative AI equivalent of a bistable two-dimensional image, like the Rubin vase or the duck-rabbit figure made famous by Wittgenstein. It simultaneously appears to be the most impactful example of where foundation models may take us and the most terrifying.

To answer your concluding question: one important line to draw concerns the data it collects about users. Privacy, and the shifting norms around what we allow companies to collect and share, have emerged as the thorniest problems of the internet era. In this moment of warranted panic about adolescent mental health, what will happen to the data your "study buddy" collects about your interactions? It is no longer just your clicks and purchases being commoditized (you are the product), but data that supposedly reveals your emotional state and affective reactions, packaged and sold to... school districts? Parents? hume.ai's business partners? Beverage companies?

Mar 29 · Liked by Marc Watkins

I haven’t used this AI, so I’m not entirely familiar with how it interacts with users. One concern I can see is that if someone uses it for friendship, emotional support, or counseling, will it always be supportive of their actions and feelings? Will it consistently take their side? I can envision this creating a false reality for the person when they interact with real people. Might they come to believe that others should always be on their side, supporting all their actions and never challenging or disagreeing with their way of thinking?


It’s just disasters all around us, constantly.
