
By Gaurav Sharma, CEO of io.net
Vitalik Buterin recently declared 2026 the year to “take back lost ground in computing self-sovereignty.” He shared the changes he’s made personally: replacing Google Docs with Fileverse, Gmail with Proton Mail, Telegram with Signal, and experimenting with running large language models locally on his own laptop rather than through cloud services.
The instinct is sound. Centralised AI infrastructure is a genuine problem. Three companies – Amazon, Microsoft and Google – now control 66% of global cloud infrastructure spending, a market that reached $102.6 billion in a single quarter last year. When every prompt flows through this concentrated infrastructure, users surrender control over data that should remain private. For anyone who cares about digital autonomy, this should feel like a structural failure. But Vitalik’s proposed solution – hosting AI locally on personal hardware – accepts a tradeoff that doesn’t need to exist. For anyone trying to build serious AI applications, his framework offers no real path forward.
The ceiling on local compute
Running AI on your own device has obvious appeal. If the model never leaves your laptop, neither does your data. No third parties, no surveillance, no dependence on corporate infrastructure. This works for lightweight use cases. An individual running basic inference or a developer experimenting with a small model can create value with locally-hosted models. Vitalik acknowledges the current limitations around usability and efficiency, but frames them as temporary friction that will smooth out over time.
However, training models, running inference at scale and deploying agents that operate continuously demand GPU power that personal hardware cannot deliver. Even a single AI agent running overnight needs persistent compute. The promise of always-on AI assistants falls apart the moment you step away from your desk. Enterprise deployments require thousands of GPU-hours per day. A startup training a specialised model could burn through more compute in a week than a high-end laptop provides in a year. An ambitious research team might spend 80% or more of its funding just on GPU capacity – resources that could otherwise go to talent, R&D or market expansion. Well-capitalised giants absorb these costs easily while everyone else is priced out.
Local hosting doesn’t solve this; it implicitly accepts a binary that leaves most builders with nowhere to go: stay small and sovereign, or scale up and hand your data to Amazon, Google or Microsoft.
A false binary
The crypto community should be well-placed to recognise this framing for what it is. Decentralisation was never intended to shrink capability to preserve independence; it’s about enabling scale and sovereignty to coexist. The same principle applies to compute.
Across the world, millions of GPUs sit underutilised in data centres, enterprises, universities, and independent facilities. Today’s most advanced decentralised compute networks aggregate this fragmented hardware into elastic, programmable infrastructure. These networks now span over 130 countries, offering enterprise-grade GPUs and specialised edge devices at costs up to 70% lower than traditional hyperscalers charge.
Developers can access high-performance clusters on demand, drawn from a distributed pool of independent operators rather than a single provider. Pricing follows usage and competition in real time, not contracts negotiated years in advance. For suppliers, idle hardware can be transformed into productive capacity.
Who benefits from open compute markets
The impact extends well beyond cost savings. For the broader market, open compute markets represent a genuine alternative to the oligopoly that currently controls AI infrastructure. Independent research groups can run meaningful experiments rather than scaling down ambitions to fit hardware constraints. Startups in emerging economies can build models for local languages, regional healthcare systems, or agricultural applications without raising the capital to secure hyperscaler contracts.
Regional data centres can participate in a global market instead of being locked out by the structure of existing deals. This is how we actually close the AI digital divide: not by asking developers to accept less powerful tools, but by reorganising how compute reaches the market. Vitalik is right that we should resist the centralisation of AI infrastructure, but the answer isn’t retreating to local hardware. Distributed systems that deliver both scale and independence already exist.
The real test of crypto’s principles
The crypto community enshrined decentralisation as a founding principle. Decentralised compute networks represent a chance to do what crypto has always claimed it could: prove that distributed systems can match, and even exceed, centralised alternatives. Lower costs, broader access, no single point of control or failure. The infrastructure already exists; the question is whether the industry will use it, or settle for a version of sovereignty that only works if you’re willing to stay small.