
Telegram’s Cocoon AI network is getting ready for prime time with a massive deployment of graphics processing units courtesy of one of the Ton blockchain ecosystem’s leading investors, AlphaTON Capital.
AlphaTON is working with SingularityNET, CUDO Compute and Vertical Data to deploy a large fleet of high-performance GPUs on the network, housed in a hydroelectric-powered data centre in Sweden. The intention is to provide the massive computing power Cocoon needs to support Telegram’s vision of a decentralised AI ecosystem.
Cocoon, which stands for “Confidential Compute Open Network”, is a decentralised AI infrastructure network built on the Ton blockchain that allows anyone to contribute computing resources. Those who make their GPUs and other AI accelerators available to the network can earn $TON cryptocurrency for renting their hardware out to the network’s customers.
With Cocoon AI, users benefit from being able to maintain control of any data they provide to AI systems, in line with the privacy-preserving ethos of the Ton blockchain. The project is the brainchild of Telegram founder and CEO Pavel Durov, and represents an effort to make AI more open and beneficial to the masses. Instead of paying for AI services like ChatGPT in both money and data, users can access them for free, and potentially earn $TON by monetising their personal data or contributing to the network.
Cocoon AI targets developers looking for access to low-cost, private infrastructure to host their applications. It’s extremely ambitious, and though some might think it has little chance of overthrowing heavyweights like OpenAI and Google, it has an ace up its sleeve in the shape of Telegram itself. The messenger app, which boasts more than 1 billion users globally, plans to use Cocoon AI to power its own AI services, and will also encourage developers in its Mini App ecosystem to do the same.
This explains AlphaTON’s interest. By supporting the network, it’s helping to increase adoption of AI and grow the broader Ton blockchain ecosystem, in which it is already heavily invested. It also stands to earn a ton of $TON in revenue from the GPUs it deploys.
AlphaTON said its GPU fleet is being funded through Vertical Data’s GPUFinancing subsidiary, which offers structured financing for large-scale deployments of decentralised AI hardware. SingularityNET, creator of a decentralised platform for building and monetising AI services, and CUDO Compute are supporting the deployment with their specialised expertise in AI infrastructure.
The deployment represents a key milestone in the convergence of ethical AI, data privacy and environmental responsibility, said Janet Adams of the SingularityNET Foundation. By hosting the GPUs in a hydroelectric-powered data centre, the partners will be able to minimise the carbon footprint of AI training and inference workloads. “Decentralised AI requires decentralised infrastructure,” she said, adding that privacy and sustainability represent a significant “competitive advantage” in the AI industry.
AlphaTON CEO Brittany Kaiser said there’s a growing recognition of the need for AI infrastructure that prioritises user sovereignty, environmental responsibility and decentralised governance. She said there are valid concerns about AI services continually slurping users’ data, the out-of-control energy consumption of AI server farms, and the level of centralised control in the industry.
“The partnership represents the future of AI infrastructure, where privacy, sustainability and decentralisation aren’t competing priorities, but foundational principles,” Kaiser said. “The AI revolution demands massive computational resources, but it doesn’t have to come at the expense of our planet or our privacy.”
AlphaTON and its partners plan to begin deploying their GPU fleet right away and to continue scaling it throughout 2026, and possibly beyond, should Cocoon AI help decentralised AI take off.
Image source: Unsplash

Google has rolled out Private AI Compute, a new cloud-based processing system designed to bring the privacy of on-device AI to the cloud. The platform aims to give users faster, more capable AI experiences without compromising data security. It combines Google’s most advanced Gemini models with strict privacy safeguards, reflecting the company’s ongoing effort to make AI both powerful and responsible.

If you’ve ever suspected that companies talk more than they act when it comes to AI strategy, a new Cisco report backs you up: just 13 percent of organisations globally are actually prepared for the AI revolution.

For all the progress in artificial intelligence, most video security systems still fail at recognising context in real-world conditions. The majority of cameras can capture real-time footage but struggle to interpret it, a shortcoming that is a growing concern for smart city designers, manufacturers and schools, all of which may depend on AI to keep people and property safe.

Adopting AI at scale can be difficult. Enterprises around the world are discovering the pace of AI deployment is frustratingly slow as they face implementation, integration, and customisation challenges. Generative AI is undoubtedly powerful, but it can be complex, particularly for businesses starting from scratch.

AI adoption in China has reached unprecedented levels, with the country’s generative artificial intelligence user base doubling to 515 million in just six months, according to a report released by the China Internet Network Information Centre (CNNIC).