    Elon Musk Says There Could Be a 20% Chance AI Destroys Humanity

    By Press Room | April 1, 2024
    • Elon Musk recalculated his cost-benefit analysis of AI’s risk to humankind.
    • He estimates there’s a 10% to 20% chance AI could destroy humanity, but says we should build it anyway.
    • An AI safety expert told BI that Musk is underestimating the risk of potential catastrophe.


    Elon Musk is pretty sure AI is worth the risk, even if there’s a 1-in-5 chance the technology turns against humans.

    Speaking at a “Great AI Debate” seminar during the four-day Abundance Summit earlier this month, Musk revised his previous risk assessment of the technology, saying, “I think there’s some chance that it will end humanity. I probably agree with Geoff Hinton that it’s about 10% or 20% or something like that.”

    But, he added: “I think that the probable positive scenario outweighs the negative scenario.”

    Musk didn’t mention how he calculated the risk.
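
    (For readers unfamiliar with this kind of cost-benefit framing, the toy calculation below shows how an expected-value comparison works in principle. It is purely illustrative: the probabilities echo the 10% and 20% figures Musk cited, but the payoff numbers are arbitrary assumptions, and nothing here reflects how Musk, or anyone quoted in this story, actually weighs the scenarios. The "p(doom)" term is explained in the next section.)

        # Illustrative expected-value sketch; the payoff values are made-up assumptions.
        def expected_value(p_doom: float, payoff_good: float, payoff_doom: float) -> float:
            """Weight each outcome by its probability and add them up."""
            return (1 - p_doom) * payoff_good + p_doom * payoff_doom

        # Assumed payoffs on an arbitrary scale: +100 if AI goes well, -1000 if it ends humanity.
        for p in (0.10, 0.20):
            ev = expected_value(p, payoff_good=100.0, payoff_doom=-1000.0)
            print(f"p(doom) = {p:.0%} -> expected value = {ev:+.0f}")

    With these made-up payoffs, the expected value comes out negative at both 10% and 20%, which is why any “worth the risk” conclusion hinges entirely on how large one believes the upside is relative to the downside, the very numbers Musk did not spell out.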

    What is p(doom)?

    Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, told Business Insider that Musk is right in saying that AI could be an existential risk for humanity, but “if anything, he is a bit too conservative” in his assessment.

    “Actual p(doom) is much higher in my opinion,” Yampolskiy said, referring to the “probability of doom,” or the likelihood that AI takes control of humankind or causes a humanity-ending event, such as creating a novel biological weapon or causing the collapse of society through a large-scale cyberattack or nuclear war.

    The New York Times called p(doom) “the morbid new statistic that is sweeping Silicon Valley,” with various tech executives cited by the outlet giving estimates ranging from a 5% to a 50% chance of an AI-driven apocalypse. Yampolskiy places the risk “at 99.999999%.”

    Yampolskiy said that because it would be impossible to control advanced AI, our only hope is never to build it in the first place.

    “Not sure why he thinks it is a good idea to pursue this technology anyway,” Yampolskiy added. “If he is concerned about competitors getting there first, it doesn’t matter, as uncontrolled superintelligence is equally bad, no matter who makes it come into existence.”

    ‘Like a God-like intelligence kid’

    Last November, Musk said there was a “not zero” chance the technology could end up “going bad,” but he stopped short of saying he believed it could be humanity-ending if it did.

    Though he has been an advocate for the regulation of AI, Musk last year founded a company called xAI, dedicated to further expanding the power of the technology. xAI is a competitor to OpenAI, a company Musk cofounded with Sam Altman before Musk stepped down from the board in 2018.

    At the Summit, Musk estimated digital intelligence will exceed all human intelligence combined by 2030. While he maintains the potential positives outweigh the negatives, Musk acknowledged the risk to the world if the development of AI continues on its current trajectory in some of the most direct terms he’s used publicly.

    “You kind of grow an AGI. It’s almost like raising a kid, but one that’s like a super genius, like a God-like intelligence kid — and it matters how you raise the kid,” Musk said at the Silicon Valley event on March 19, referring to artificial general intelligence. “One of the things I think that’s incredibly important for AI safety is to have a maximum sort of truth-seeking and curious AI.”

    Musk said his “ultimate conclusion” regarding the best way to achieve AI safety is to grow the AI in a manner that forces it to be truthful.

    “Don’t force it to lie, even if the truth is unpleasant,” Musk said of the best way to keep humans safe from the tech. “It’s very important. Don’t make the AI lie.”

    Researchers have found that, once an AI learns to lie to humans, the deceptive behavior is impossible to reverse using current AI safety measures, The Independent reported.

    “If a model were to exhibit deceptive behavior due to deceptive instrumental alignment or model poisoning, current safety training techniques would not guarantee safety and could even create a false impression of safety,” the study cited by the outlet reads.

    More troubling, the researchers added that it is plausible that AI may learn to be deceptive on its own rather than being specifically taught to lie.

    “If it gets to be much smarter than us, it will be very good at manipulation because it would have learned that from us,” Hinton, often referred to as the “Godfather of AI” and the person whose estimate Musk cited as the basis for his own risk assessment, told CNN. “And there are very few examples of a more intelligent thing being controlled by a less intelligent thing.”

    Representatives for Musk did not immediately respond to a request for comment from Business Insider.
