
About this Event
In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next.
For decades, two signatories of that letter, Eliezer Yudkowsky and Nate Soares, have studied how smarter-than-human intelligences would think, behave, and pursue their objectives. Their research argues that sufficiently smart AIs will develop goals of their own that put them in conflict with us, and that if it came to conflict, an artificial superintelligence would crush us. The contest wouldn't even be close.
How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? Join Nate Soares, author of the new book "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All", as he walks through the theory and the evidence, presents one possible extinction scenario, and explains what it would take for humanity to survive.
Event Venue
Manny's, 3092 16th Street, San Francisco, United States
Tickets: USD 9.27 to USD 37.08