    ‘Everyone Dies’: Why two top scientists are AI doomers
    March 28, 2026

    I don’t read a lot of nonfiction, hardcover, current-bestseller-list books.

    Hey, for an old English major, I don’t read that many book-books at all anymore. First off, the three print newspapers that arrive on my driveway 365 mornings of the year are time bandits of their own when it comes to hours available for reading in a day. Then there are the magazines that fill the mailbox, intellectual or otherwise.

    But I’ve always got a novel working, often enough an audiobook for my running trail in the morning and for chores in the house and the garden.

    Book-books are for right before bed, and bed is for falling asleep, and for my reading glasses to fall off my nose when I nod off after barely getting through 10 pages of a tome.

    But I just couldn’t resist the siren lure of a title like this, which I heard mentioned on the radio: “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.”

    So, dear reader, I bought a copy, and I read it.

    Book report: “War and Peace,” or, I don’t know, “Silent Spring,” it is not. Authors Eliezer Yudkowsky and Nate Soares are deep-thinking computer scientists and machine-intelligence researchers, and they know a lot, and some of it, starting with the title, can scare the pants off you. But this is a weird and even goofy effort, which, even at a mere 233 pages of real prose, is padded out with forays into oddball skits: dialogues between the imaginary characters Soberskeptic and Oldhand.

    What you read it for, and Lord knows the subject is serious as a heart attack, and certainly worth contemplating, is the insight of two guys who have spent decades working with the smarter-than-humans, amorphous … thing … that is artificial intelligence. Because the singularity — artificial intelligence surpassing human intelligence, and taking over — is rather worth avoiding. And these two guys, who were among the hundreds of scientists who signed the 2023 statement “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” have a footnote about that letter: “we considered it a severe understatement.”

    Because the authors, who know way more than we do about the risks, say they really mean “everyone dies”: “We do not mean that as hyperbole. We are not exaggerating for effect.” In AI world, there are the boomers — “Robots will do all the work, serving us mai tais poolside!” — and the doomers — “Bang bang, meat puppets.”

    They lean toward the doom.

    You read this book for the bangers. And there are plenty of them.

    “Once AIs get sufficiently smart, they’ll start acting like they have preferences — like they want things.”

    “Wouldn’t humans be useful to a superintelligence, even if that AI didn’t want to be nice for the sake of niceness? Not once the AI reached a high technology level. There was a point where humans were dependent on horses … When we developed technological substitutes for horses, we stopped keeping horses.”

    Q: “But wouldn’t the AI keep us around as pets?” A: “Humans keep dogs as pets … but not wolves.”

    For AI, “Humanity is an inconvenience to you. For example, if you allow humans to run around unchecked, they could set off their nuclear bombs.”

    If the authors are doomers rather than boomers about AI, they remain odd optimists. “If everyone woke up one morning believing only a quarter of what we believe,” they write, “and everyone knew everyone else believed it, they’d walk out into the street and shut down the datacenters … Can Earth survive if only some people do their part? Perhaps; perhaps not.”

    Fine with me to give the robots the finger rather than the therapy some have prescribed. Are you in?

    Larry Wilson is on the Southern California News Group editorial board. lwilson@scng.com.

    Orange County Register
