The meteoric rise of artificial intelligence may appear unstoppable — but it’s facing a shortage of training data.
“We’ve already run out of data,” Neema Raphael, Goldman Sachs’ chief data officer and head of data engineering, said on the bank’s “Exchanges” podcast published on Tuesday.
Raphael said that this shortage may already be influencing how new AI systems are built.
He pointed to China’s DeepSeek as an example, saying one hypothesis for its purportedly low development costs is that the model was trained on the outputs of existing models rather than on entirely new data.
“I think the real interesting thing is going to be how previous models then shape what the next iteration of the world is going to look like in this way,” Raphael said.
With the web largely tapped out, developers are turning to synthetic data: machine-generated text, images, and code. That approach offers a near-limitless supply, but it also risks overwhelming models with low-quality output, often dubbed “AI slop.”
However, Raphael said he doesn’t think the lack of fresh data will be a massive constraint, in part because companies are sitting on untapped reserves of information.
From a consumer standpoint, Raphael said, the industry is already well into a “synthetic sort of explosion of data.” “But from an enterprise perspective, I think there’s still a lot of juice, I’d say, to be squeezed in that,” he said.
That means the real frontier may not be the open internet, but the proprietary datasets held by corporations. From trading flows to client interactions, firms like Goldman sit on information that could make AI tools far more valuable if harnessed correctly.
Raphael’s comments come as the industry grapples with the idea of “peak data,” a debate that has been building since ChatGPT’s breakout three years ago.
In January, OpenAI cofounder Ilya Sutskever said at a conference that all the useful data online had already been used to train models, warning that AI’s era of rapid development “will unquestionably end.”
The next frontier: proprietary data
For businesses, Raphael stressed, the obstacle isn’t just finding more data — it’s ensuring that the data is usable.
“The challenge is understanding the data, understanding the business context of the data, and then being able to normalize it in a way that makes sense for the business to consume it,” he said.
Still, Raphael suggested that heavy reliance on synthetic data raises a deeper question about AI’s trajectory. “I think what might be interesting is people might think there might be a creative plateau,” he said.
He wondered what would happen if models kept training only on machine-generated content.
“If all of the data is synthetically generated, then how much human data could then be incorporated?” he said.
“I think that’ll be an interesting thing to watch from a philosophical perspective,” he added.