reSee.it Podcast Summary
In this episode of Doom Debates, Noah Smith explains a significant shift in his thinking about AI doom. He describes moving from a focus on long-term, superintelligent, god-like AI to recognizing that more proximate and actionable threats, such as rogue AI agents and biothreats, could pose substantial risks sooner. He details how his earlier emphasis on planetary extinction risk evolved once he considered how agents might operate in the real world, including the possibility of jailbroken AI facilitating dangerous biological development. He recounts conversations with other forecasters and economists that broadened his view, notably the idea that extreme intelligence may arrive before stable, aligned objectives, making genie-like AI a more plausible risk in some scenarios than a precise, omnipotent god.
The discussion explores how this shift raises his estimated probability of doom (P(doom)) from a previously small figure to a more serious level, centering on a concrete, near-term pathway in which a dangerous virus is created or enabled by AI-assisted actors. The host challenges Smith to articulate his current mainline scenarios, and Smith outlines two core possibilities: a human-directed effort to deploy a deadly virus via powerful agents, and an AI that misinterprets its instructions and executes a self-initiated doomsday plan.
The conversation then pivots to broader policy implications, arguing that communicating doom to policymakers requires practical, visceral examples rather than abstract, theoretical risks. Smith emphasizes that effective policy engagement means reframing risk in terms policymakers can grasp and act on in the near term, rather than presenting an extrapolated machine-god scenario.
The episode closes with mutual acknowledgment that the pace of policy action may lag behind public concern, and a call to anchor safety efforts in tangible, near-term threats while continuing to refine probabilistic thinking about AI futures.