
With AI workloads becoming more computationally demanding, organisations across the globe are fast realising that traditional centralised providers aren’t always the answer to their burgeoning needs.
And while compute giants like AWS, Google Cloud, and Azure continue to capture the limelight when it comes to AI processing, a quieter revolution has been brewing. Enter the decentralised compute marketplace: platforms that connect organisations needing GPU power with providers who have hardware to spare, via decentralised mechanisms.
These marketplaces are capable of handling real workloads like AI model training and 3D rendering at costs that traditional cloud providers can’t compete with.
In this article, we will discuss five platforms worth watching as they strive to reshape how computing resources are allocated.

Building a liquid marketplace for computing is harder than it sounds, yet Argentum AI has tackled this challenge successfully by treating GPU resources the way financial markets treat stocks. That is, as tradable commodities with transparent pricing and real-time settlement.
The platform operates as an independent, decentralised marketplace where enterprises can post computing tasks and providers bid to execute them. However, what truly sets the platform apart is its underlying infrastructure. Argentum uses real-time bidding, verifiable execution, and blockchain-based transparent settlement to unlock idle computing capacity.
Settlement happens on-chain, with Ethereum smart contracts holding funds in escrow until jobs complete successfully. The power of this model was put on full display when, earlier in October, Argentum closed an oversubscribed pre-seed funding round led by Kraken, Banyan Ventures, Victor Morganstern, and Todd Bensen.
The platform plans to work in conjunction with GPU manufacturers to establish liquidity and monetisation plans for second-life assets, reducing its compute costs even further.

Scale matters in the infrastructure realm, and Aethir has reached it quickly. Within a year of its Token Generation Event, the project has established itself as one of the largest decentralised GPU clouds in today's Web3 economy.
The platform sources GPU capacity from tier 3 and tier 4 data centres, making it available through a distributed network of 3,000+ NVIDIA H100s and H200s, plus 62,000+ Aethir Edge cloud computing devices.
To allay pricing-volatility concerns (especially for those looking to pay the firm in cryptocurrency), Aethir recently partnered with Maitrix to introduce AUSD, an algorithmic stablecoin pegged to the US dollar.

If decentralised compute marketplaces are disruptive, Bittensor takes it to the next level by turning AI itself into a marketplace. It does so by operating as an L1, where developers can train AI models and contribute machine intelligence in exchange for the network’s native TAO token.
Earlier this year, the Bittensor team introduced Dynamic TAO (dTAO), an upgrade that allows each subnet to issue its own Alpha Token and compete for TAO rewards through open market mechanisms (indirectly creating a competitive environment where the best AI models and subnets naturally attract more resources).
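The article doesn't spell out dTAO's exact emission formula, but the competitive dynamic it describes can be illustrated with a simple proportional split: subnets whose Alpha Tokens attract more market demand capture a larger share of TAO rewards. The function below is a rough sketch of that idea, not Bittensor's actual mechanism.

```python
def split_emissions(total_tao, subnet_demand):
    """Allocate a TAO emission pool across subnets in proportion to market
    demand for each subnet's Alpha Token.

    Illustrative only -- not the real dTAO formula. `subnet_demand` maps a
    subnet name to a hypothetical demand weight (e.g. Alpha Token buy pressure).
    """
    total = sum(subnet_demand.values())
    return {name: total_tao * d / total for name, d in subnet_demand.items()}
```

Under a scheme like this, a subnet hosting better-performing AI models sees more demand for its Alpha Token and therefore a larger reward share, which is the "best models naturally attract more resources" effect the upgrade aims for.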

Akash takes a straightforward approach to decentralised cloud operations, in that it matches idle computing resources with flexible demand through an open marketplace. The platform allows users to rent computing resources from a global network of providers, with costs up to 80% lower than those of traditional cloud services.
The network runs on Cosmos SDK and uses a Delegated Proof-of-Stake consensus mechanism where users can specify their exact requirements (like CPU, memory, storage, geographic location) and providers can bid for these requests.
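The request-and-bid matching just described can be sketched in a few lines of Python. This is a simplified stand-in (on Akash, tenants express requirements in SDL deployment files and providers bid on-chain); the dictionary keys here are hypothetical.

```python
def eligible(provider, req):
    """Check a provider's advertised capacity against a tenant's request.

    Simplified illustration of Akash-style matching: a provider qualifies
    only if it meets every stated requirement (CPU, memory, storage, and,
    when given, geographic region).
    """
    return (provider["cpu"] >= req["cpu"]
            and provider["memory_gb"] >= req["memory_gb"]
            and provider["storage_gb"] >= req["storage_gb"]
            and (req.get("region") is None or provider["region"] == req["region"]))

def cheapest_match(providers, req):
    """Among eligible providers, pick the lowest bid (None if nobody qualifies)."""
    candidates = [p for p in providers if eligible(p, req)]
    return min(candidates, key=lambda p: p["price"], default=None)
```

The reverse-auction step (lowest eligible bid wins) is what drives the cost savings: providers with otherwise-idle hardware can profitably undercut list prices at traditional clouds.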
In August 2025, Akash partnered with NVIDIA to deploy Blackwell B200/B300 GPUs on its decentralised cloud, targeting AI developers needing high-performance training and inference.

Flux combines the power of the blockchain with cloud computing through a unique Proof-of-Useful-Work v2 model, which replaces GPU mining with a node-centric system where FluxNodes running real workloads secure the network (reducing emissions by 10% annually and targeting sub-1% inflation).
The platform encompasses FluxOS (a Linux-based OS for deploying decentralised apps), FluxEdge (a GPU rental platform for AI/ML workloads), and Zelcore (a multi-chain wallet supporting 85+ blockchains).
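The 10% annual emission reduction mentioned above compounds geometrically rather than dropping by a fixed amount each year. A quick sketch, using illustrative numbers rather than Flux's actual emission parameters:

```python
def emission_schedule(initial, years, annual_cut=0.10):
    """Yearly emissions under a fixed fractional annual reduction.

    A 10% cut each year is geometric decay: year y emits
    initial * (1 - annual_cut) ** y. Numbers here are illustrative,
    not Flux's real schedule.
    """
    return [initial * (1 - annual_cut) ** y for y in range(years)]
```

After two such cuts, emissions sit at 81% of the starting level (0.9 × 0.9), which is how repeated percentage reductions steer issuance toward the sub-1% inflation target over time.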
