The future of artificial intelligence is not just about raw computational power—it’s about trust, privacy, and accessibility. In a groundbreaking move, Phala Network and io.net have announced a strategic partnership aimed at revolutionizing secure, decentralized AI through the integration of GPU-based Trusted Execution Environments (GPU-TEE).
This collaboration merges Phala’s leadership in confidential computing with io.net’s scalable, cost-efficient decentralized GPU infrastructure, setting a new benchmark for privacy-preserving, high-performance AI workloads in web3.
Advancing Secure Computation with TEE Technology
Since its mainnet launch in 2021, Phala Network has established one of the largest decentralized networks of Trusted Execution Environment (TEE) CPU nodes—boasting over 30,000 active nodes. These nodes enable developers to offload complex computations from blockchain smart contracts to Phala’s secure off-chain environment, all while maintaining full data confidentiality and verifiability.
By leveraging hardware-level security features like Intel SGX, Phala ensures that sensitive data remains encrypted during processing. This capability is critical for next-generation web3 applications, including privacy-first social platforms, decentralized identity systems, and autonomous AI agents.
Now, with the expansion into GPU-TEE, Phala is pushing the boundaries of what’s possible in secure AI computation.
Introducing the First-Ever GPU-TEE Benchmark
Phala has recently released a comprehensive benchmark study evaluating the performance of NVIDIA’s H100 and H200 GPUs when integrated with TEE technology. This milestone marks the first public assessment of GPU-accelerated confidential computing in a decentralized context.
The benchmark demonstrates that these cutting-edge GPUs can efficiently run large language models such as LLaMA 3 and Microsoft Phi, while preserving end-to-end data privacy through encrypted memory and secure enclaves. This opens the door to training and deploying AI models on decentralized networks without compromising security—an essential requirement for enterprise-grade and privacy-sensitive applications.
With this advancement, Phala is not only enhancing computational throughput but also ensuring that every operation within the TEE can be cryptographically verified, aligning perfectly with the principles of transparency and trustlessness in web3.
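The verifiability claim rests on remote attestation: the enclave produces a signed "quote" over a measurement (hash) of the code it is running, and a client checks that quote before trusting any output. The sketch below is a toy stand-in for that flow, assuming nothing about Phala's or NVIDIA's actual attestation APIs; an HMAC with a shared key plays the role of the hardware-rooted signature, which in real deployments is verified against the vendor's attestation service.

```python
import hashlib
import hmac

# Toy stand-in for hardware attestation. In a real GPU-TEE flow the
# enclave signs a measurement of its loaded code with a key rooted in
# silicon; here an HMAC key stands in for that hardware root of trust.
ENCLAVE_KEY = b"hardware-rooted-secret"  # hypothetical, for illustration only

def measure(workload_code: bytes) -> str:
    """Hash of the code the enclave claims to be running."""
    return hashlib.sha256(workload_code).hexdigest()

def produce_quote(workload_code: bytes) -> tuple[str, str]:
    """Enclave side: a measurement plus a signature over it."""
    m = measure(workload_code)
    sig = hmac.new(ENCLAVE_KEY, m.encode(), hashlib.sha256).hexdigest()
    return m, sig

def verify_quote(expected_code: bytes, m: str, sig: str) -> bool:
    """Client side: the measurement must match the code we expect,
    and the signature over it must check out."""
    good_sig = hmac.new(ENCLAVE_KEY, m.encode(), hashlib.sha256).hexdigest()
    return m == measure(expected_code) and hmac.compare_digest(sig, good_sig)

code = b"def run_llm(prompt): ..."
m, sig = produce_quote(code)
print(verify_quote(code, m, sig))         # True: workload is what we expect
print(verify_quote(b"tampered", m, sig))  # False: measurement mismatch
```

The key property illustrated is that a verifier never needs to see inside the enclave: matching the measurement against known-good code is enough to know what is running.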
Powering Decentralized AI with io.net’s GPU Cloud
Enter io.net—a decentralized compute network that aggregates underutilized GPU resources from data centers and enterprises worldwide. By transforming these distributed GPUs into a unified, cloud-like infrastructure, io.net enables machine learning engineers and developers to deploy GPU clusters on demand—often at up to 90% lower cost than traditional cloud providers like AWS or Google Cloud.
Through this partnership, Phala gains access to io.net’s vast pool of NVIDIA H100 and H200 Tensor Core GPUs, which are purpose-built for AI workloads and include native confidential computing capabilities such as:
- Encrypted memory
- Secure boot
- Hardware-isolated execution environments
These features align seamlessly with Phala’s TEE architecture, creating a powerful synergy where both security and scalability are prioritized.
Developers can now use the IO SDK to integrate their Python-based ML applications, built on the widely adopted Ray framework (used by OpenAI), into this secure, decentralized ecosystem. The result: a global, low-latency network for machine learning that functions like a Content Delivery Network (CDN) for AI, bringing computation closer to users and reducing inference times.
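Ray's core pattern is the remote task: decorate a function, fan it out across cluster workers, then gather the results (with the real SDK this is `@ray.remote` plus `ray.get()`). The sketch below shows that same fan-out/gather shape using only the standard library, so it runs without a cluster; the io.net endpoints and IO SDK calls are assumptions and are omitted.

```python
from concurrent.futures import ThreadPoolExecutor

# Ray-style fan-out/gather, with a stdlib thread pool standing in for a
# remote GPU cluster. Each "shard" of documents would be one remote task.

def embed_shard(shard: list[str]) -> list[int]:
    # Placeholder "inference": a per-document token count stands in for
    # a real model call running on a remote GPU worker.
    return [len(doc.split()) for doc in shard]

shards = [
    ["decentralized AI needs trust", "GPU clusters on demand"],
    ["confidential computing preserves privacy"],
]

with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(embed_shard, s) for s in shards]  # fan out, one task per shard
    results = [f.result() for f in futures]                  # gather, like ray.get()

print(results)  # [[4, 4], [4]]
```

With Ray proper, the pool disappears: the decorated function is scheduled onto whichever cluster nodes have free GPUs, which is what makes the "CDN for AI" analogy apt.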
Democratizing Access to High-Performance AI
One of the most transformative aspects of this collaboration is its potential to democratize access to advanced AI infrastructure. Historically, training large models has been limited to well-funded tech giants due to the high costs of acquiring and maintaining GPU clusters.
But with io.net’s pay-as-you-go model and Phala’s secure computation layer, individual developers, startups, and research teams can now:
- Launch scalable GPU clusters in seconds
- Run confidential AI workloads without exposing raw data
- Pay significantly less than centralized alternatives
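The workflow these bullets describe can be sketched as a small request object. Every name below is invented for illustration and does not come from the io.net SDK; the point is the shape of the model: choose hardware, size, and a confidential-compute flag, then pay per GPU-hour instead of owning the cluster.

```python
from dataclasses import dataclass

# Hypothetical cluster request, not a real io.net API. Rates are
# illustrative placeholders, not quotes.

@dataclass
class ClusterRequest:
    gpu_model: str          # e.g. "H100" or "H200"
    gpu_count: int
    tee_enabled: bool       # run workloads inside GPU-TEE enclaves
    hourly_rate_usd: float  # assumed per-GPU rate for the example

    def cost(self, hours: float) -> float:
        """Pay-as-you-go total: GPUs x rate x hours."""
        return round(self.gpu_count * self.hourly_rate_usd * hours, 2)

req = ClusterRequest(gpu_model="H100", gpu_count=8, tee_enabled=True,
                     hourly_rate_usd=2.5)
print(req.cost(24))  # 480.0 for a day of 8 GPUs at the assumed rate
```

The contrast with ownership is the argument: a short-lived, right-sized cluster is billed only for the hours used, which is what opens the door to startups and research teams.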
Future enhancements planned for the IO platform—including the IO Models Store, serverless inference, cloud gaming, and pixel streaming—will further expand use cases beyond AI into immersive web3 experiences.
Research, Testing, and Future Integration Roadmap
The partnership goes beyond infrastructure sharing. Phala and io.net will jointly conduct research, testing, and benchmarking of GPU-TEE performance using state-of-the-art hardware. Initial efforts will focus on:
- Deploying autonomous AI agents on the IO Network
- Running AI agent contracts securely within TEEs
- Integrating Phala’s TEE workers directly into io.net’s GPU clusters
These technical integrations aim to create a unified stack where AI models can be trained, executed, and verified in a fully decentralized and privacy-preserving manner.
Long-term, both teams envision deeper protocol-level interoperability—potentially enabling cross-chain AI services, verifiable model provenance, and incentive mechanisms powered by tokenized compute contributions.
Frequently Asked Questions (FAQ)
What is GPU-TEE and why does it matter?
GPU-TEE refers to extending Trusted Execution Environments to Graphics Processing Units. It allows sensitive AI computations—like model training or inference—to occur in a hardware-isolated, encrypted environment. This ensures data privacy even if the underlying system is compromised, making it vital for secure decentralized AI.
How does this partnership benefit developers?
Developers gain access to high-performance GPUs at a fraction of traditional costs, combined with strong security guarantees. They can build privacy-preserving AI applications without managing physical hardware or sacrificing decentralization.
Can I run LLMs like LLaMA 3 on this network?
Yes. The benchmarked NVIDIA H100 and H200 GPUs are capable of efficiently running large language models such as LLaMA 3 and Microsoft Phi. When paired with Phala’s TEE technology, these models can be executed securely and verifiably.
Is this solution only for AI applications?
While AI is a primary focus, the infrastructure supports any compute-intensive workload requiring privacy—such as financial modeling, healthcare analytics, or encrypted data processing.
How does io.net achieve up to 90% cost savings?
io.net aggregates idle GPU capacity from various providers globally, eliminating overhead from centralized cloud infrastructures. This peer-to-peer resource model drastically reduces costs while maintaining performance.
Will this affect blockchain scalability?
Not directly. However, by offloading heavy computation off-chain via secure TEEs, this solution indirectly improves blockchain efficiency—freeing smart contracts from resource-intensive tasks.
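The offloading pattern in this answer can be made concrete: the expensive work runs off-chain (inside a TEE, in Phala's design), and only a small commitment to the result is stored on-chain for later verification. The sketch below uses a plain dict as a stand-in for on-chain state and omits attestation entirely; it is an illustration of the pattern, not Phala's implementation.

```python
import hashlib
import json

# Off-chain compute, on-chain commitment: the contract never runs the
# heavy job, it only stores a digest it can later check results against.

def heavy_offchain_job(numbers: list[int]) -> dict:
    # Stand-in for work far too expensive for a smart contract.
    return {"sum": sum(numbers), "max": max(numbers)}

def commit(result: dict) -> str:
    # Deterministic digest: canonical JSON, then SHA-256.
    blob = json.dumps(result, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

chain_storage = {}  # stand-in for on-chain state

result = heavy_offchain_job(list(range(1_000_000)))
chain_storage["job-1"] = commit(result)  # only the digest goes "on-chain"

# Anyone holding the full result can recompute the digest and check it.
print(chain_storage["job-1"] == commit(result))  # True
```

This is why the scalability benefit is indirect: the chain's workload shrinks to storing and comparing digests, while the TEE supplies the guarantee that the off-chain result was computed honestly.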
Conclusion
The strategic alliance between Phala Network and io.net represents a pivotal step toward a more open, secure, and accessible AI ecosystem. By combining decentralized GPU power with hardware-enforced privacy through TEEs, they are laying the foundation for a new generation of verifiable, trustless AI applications in web3.
As demand for private, scalable AI grows—from autonomous agents to on-chain analytics—this partnership places both networks at the forefront of innovation. For developers and organizations alike, the message is clear: the future of AI isn’t just intelligent—it’s confidential, decentralized, and within reach.