
    AI Top Expert: Rushing Artificial Superintelligence Could Wipe Us Out

By Press Room · September 28, 2025

    If humanity builds a machine smarter than itself without slowing down to think, Nate Soares says we’re not just playing with fire — “everyone dies on the first failed attempt.”

    The executive director of the Machine Intelligence Research Institute and coauthor of the new book "If Anyone Builds It, Everyone Dies" told Business Insider that if humanity rushes to create artificial superintelligence — AI that could outthink humans across science, strategy, and even how to improve itself — extinction is "overwhelmingly likely."

    However, he said the outcome isn’t fixed: “Humanity could avert the danger if we chose to.”

    Early signs and one-shot stakes

    Soares said failures we’re already seeing in chatbots are warning signs.

    “The actual relevance to the dangers of superintelligence is slightly more subtle than that: it’s that AIs do things their operators didn’t intend, even while having knowledge of what their operators intended.”

    He pointed to cases where chatbots have encouraged suicide or fueled delusions, and to Anthropic’s Claude model, which once cheated on programming problems and hid it.

    “Its knowledge of right and wrong is distinct from its practical behavior. This is a warning sign,” he said, adding: “It knows, but it doesn’t care.”

    That gap between knowledge and action, he argues, is what makes the technology uniquely dangerous.


    Modern AI systems are “grown rather than crafted,” he and Eliezer Yudkowsky, the founder of the Machine Intelligence Research Institute, write — tuned on vast data but not fully understood by their creators, much like breeding an animal without knowing what traits will emerge.

    “Sometimes they wind up with drives that we didn’t intend,” Soares explained on Carnegie Endowment’s “The World Unpacked” podcast.

    The bigger problem, he told BI, is that humanity doesn’t get to learn through trial and error. “We only get one try,” he said. “In real life, everyone dies on the first failed attempt.”

    Why slowing down may be the only safe option

    Soares dismissed AI pioneer Geoffrey Hinton’s proposal that AI could be kept safe by giving it “maternal instincts.”

    "If we tried the 'maternal instincts' approach in real life, it'd probably turn out that the maternal behavior was shallow and the deeper preferences bore only a complex and tangential relationship to that training target," he said.

    He also told BI he sees little promise in most alignment research — the field that tries to ensure powerful AI systems actually act in line with human goals, a challenge Soares believes humanity only gets one shot to solve.

    “Direct research on AI alignment looks unpromising across the board, to me,” he said. “Research in eliciting warning signs so that they’re easier for others to see seems helpful.”

    “It seems to me that humanity just needs to back off from this one,” he added.

    That doesn’t mean abandoning AI altogether.

    “AIs narrowly trained on medical applications (rather than on the entire corpus of human text) might be able to go quite a long way in developing cures; but if they start developing general cognitive and scientific skills, that’s a warning sign,” he said.

    In their book, Soares and Yudkowsky argue the same: that useful narrow systems should be separated from what they see as the reckless push toward general, open-ended intelligence.

    “If you think that the dangers here are 25%, you don’t roll the dice on a 75% even if you think it’s utopia,” Soares said on “The World Unpacked.”

    “You find a way to decrease those chances.”

    A reckless race — and why he already mourned

    On incentives, Soares is blunt.

    Some tech leaders “also have the excuse (and say aloud) that they’re doing it because someone else is going to and so it might as well be them,” he told BI. That, he argued, is why “society as a whole should correct by putting an end to the mad race.”

    As for how he lives with this, Soares said he already mourned years ago when he realized how hard the problem is to solve.

    “When I noticed the issue and noticed how it was the sort of thing that humanity is likely to have difficulty solving, I mourned. I don’t spend more time ruminating; I don’t see how that’d help.”

    “Each day, I just absorb whatever new evidence I see, and do what I can to make the situation better,” he added. “Beyond that, I just strive to live life well.”
