5 Comments

Tbh, I wish AI would go back to the hell it came from. As a writer and a human being, I've found it to be nothing but a source of pain, even before considering the risk that it kills us all.

The answer to the question "What do I want?", most of us know, is "Not what I think I want." Many corporations use that point cynically; Substack's executives have used it to explain why they don't want strong content moderation, arguing that you cannot find what you really want if a governance system provides only what you think you want. It turns out that the answer is also not "Exactly the opposite of what I think I want," or "What corporations think I should want," or "What some specific group of political thinkers have modelled me as wanting."

The deeper problem behind the question is: how did we arrive at a sociopolitical norm where "What do I want?" is a determinatively important question, and where the thought that I am not getting what I want is a problem that needs to be solved? For many past human societies, it wasn't even a question they would have thought to ask in those terms, let alone one they would have tried to resolve in the ways we have imagined.

An AI that was a 'strange attractor', one that could think of the things we want but can't name or imagine for ourselves, would be interesting, even if it were inhuman or ahuman. But that is not how we think of AI. What we hope for in other people is to be understood just enough that we discover the satisfaction, possibility, hope, aspiration, knowledge, and wisdom that we need and deserve, and thus that we wanted without fully knowing that we did. We often especially hope for that from teachers: that they satisfy a condition of incompleteness that we couldn't have named or described before it was fulfilled. That is what might get 'dehumanized' by the idea of AI as a satisfaction engine, as a mirror of the desires we can already articulate fully. If AI is useful at all, it would be as one more 'helpful stranger'; but even if that is possible, tech capitalism is fully incapable of thinking of it except as an alibi for decisions it has reached for other reasons.

Great piece! The connection between porn deepfakes of Taylor Swift and the AI teacher avatar seems to be dehumanization. We are being encouraged to see everything as mediated through ever more layers of technology while being reduced to our value as data and as data consumers. But that dehumanization is not evenly spread. Many of the longtermists who run Silicon Valley are very interested in the human, just a very small sliver of very particular humans.

Thanks, Katie! I agree about the porn aspect. I really don't know what the path forward is for regulating that; there are too many open models that any actor can modify and refine on their own. It really sucks that certain folks in power view this as a mere 'near-term harm' and thus don't give the consequences much thought.

Me neither. And yeah, it seems that most of the harms identified over the past three years in particular have been waved away as insignificant.
