The AI Prediction Sam Altman Says He Didn’t Get Quite Right

Sam Altman says he correctly predicted how AI would develop. There is, however, one prediction he didn’t quite get right.

“I feel like we’ve been very right on the technical predictions, and then I somehow thought society would feel more different if we actually delivered on them than it does so far,” Altman, the CEO of OpenAI, said on a recent episode of “Uncapped with Jack Altman.” “But I don’t even — it’s not even obvious that that’s a bad thing.”

Altman believes that OpenAI has “cracked” reasoning in its models, and said its o3 LLM in particular is on par with a human Ph.D. in many subject areas. Though the technology has progressed largely as he expected, Altman said people aren’t reacting as strongly as he anticipated.

“The models can now do the kind of reasoning in a particular domain you’d expect a Ph.D. in that field to be able to do,” he said. “In some sense we’re like, ‘Oh okay, the AIs are like a top competitive programmer in the world now,’ or ‘AIs can get like a top score on the world’s hardest math competitions,’ or ‘AIs can like, you know, do problems that I’d expect an expert Ph.D. in my field to do,’ and we’re like not that impressed. It’s crazy.”

While AI use is on the rise, society hasn’t yet transformed in leaps and bounds. AI is already affecting businesses, with many companies adopting AI tools and, in some cases, using them to augment or replace human labor.

Altman believes the response to the technology has been relatively underwhelming when held up to what he sees as its future potential.

“If I told you in 2020, ‘We’re going to make something like ChatGPT and it’s going to be as smart as a Ph.D. student in most areas, and we’re going to deploy it, and a significant fraction of the world is going to use it and kind of use it a lot,'” he said. “Maybe you would have believed that, maybe you wouldn’t have.”

“But conditioned on that, I bet you would say ‘Okay, if that happens, the world looks way more different than it does right now,'” he added.

Altman acknowledges that AI is currently most useful as a sort of “co-pilot,” but foresees major change if it’s ever able to act autonomously — especially considering its potential applications in science.

“You already hear scientists who say they’re faster with AI,” he said. “Like, we don’t have AI maybe autonomously doing science, but if a human scientist is three times as productive using o3, that’s still a pretty big deal. And then, as that keeps going and the AI can like autonomously do some science, figure out novel physics … “

In terms of risk, Altman said he isn’t too concerned, despite other AI leaders — like Anthropic’s Dario Amodei and DeepMind’s Demis Hassabis — saying that they worry about potential catastrophic scenarios in the future.

“I don’t know about way riskier. I think like, the ability to make a bioweapon or like, take down a country’s whole grid — you can do quite damaging things without physical stuff,” he said. “It gets riskier in like sillier ways. Like, I would be afraid to have a humanoid robot walking around my house that might fall on my baby, unless I like really, really trusted it.”

OpenAI did not immediately respond to a request for comment from Business Insider.

For now, Altman said, life remains relatively constant. But if things begin to snowball — and he believes that they will — he has no concrete idea what the world may end up looking like.

“I think we will get to extremely smart and capable models — capable of discovering important new ideas, capable of automating huge amounts of work,” he said. “But then I feel totally confused about what society looks like if that happens. So I’m like most interested in the capabilities questions, but I feel like maybe at this point more people should be talking about, like, how do we make sure society gets the value out of this?”