Human Autonomy and AI
A very penetrating conversation between Johnathan Bi and Brendan McCord, in which they discuss a risk of AI often eclipsed by existential fears: the erosion of human autonomy. As people increasingly use AI tools as “operating systems” for everyday decisions, from what to eat to what to say, the danger, they argued, isn’t extinction, but living life as a passive NPC.
The argument here is that true flourishing depends on preserving our capacity for self-direction and deliberation. Technology, they agreed, always gives with one hand and takes with the other; with AI, the risk is outsourcing not just tasks, but the very practice of practical reasoning that underpins autonomy. I really like it when there is a sense of (philosophical) grounding around building products, one that aims to preempt second-, third-, and higher-order effects rather than merely optimizing for cash money. There is a good book I read recently about what happens when we ignore such an approach; you can find my review here:
Can We Survive AI
This is the first part of a conversation with Eliezer Yudkowsky and Nate Soares about their new book, "If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI." I believe the second part is behind a paywall; I haven't listened to it. But it's a very interesting conversation. It echoes many of the ideas in AI 2027, and you can find my review of it here:
One thing they touched on is the emergent behavior of LLMs that makes them, they say, near-impossible to align. There is also a really good layman's explanation of how modern AI systems are built at the end of the pod.
Speaking of (narrow) alignment failures, or the lack of alignment altogether: read the story of a recent lawsuit in which a parent alleges that ChatGPT encouraged their teenage son to take his own life.
AI Job Transformation is Coming
A discussion of various recent AI news items, in conversation with Reid Hoffman.