Discussion about this post

Craig Van Slyke

Thanks for a thought-provoking article. The way we anthropomorphize AI (and tech in general) has serious implications. I still struggle with my own mental models of AI, and I suspect I'm not alone.

Rob Nelson

Great piece, Marc. It affirms my sense that we need analogies and metaphors different from those of the human mind. The initial enthusiastic response to new models always imagines the gap between what the machine can do (quite impressive in this case!) and what a human would do as slight. It feels close if you imagine the machine is thinking. How hard can it be to get the model's outputs to agree with reality or truth?

The answer is that it is quite hard. The model does not think in a way that fits the analogy of the human mind's processes. It has no understanding of reality or truth outside the vectors it uses to manipulate language. Layering on post-training to get it to answer questions more reliably will not change its fundamental nature, which is grounded in probabilistic mathematics applied to language, not a human understanding of reality.

