The Integration of AI and DePIN: The Rise and Challenges of Decentralized GPU Computing Networks
Since 2023, AI and DePIN have gained significant attention in the Web3 space, reaching market capitalizations of roughly $30 billion and $23 billion, respectively. This article explores the intersection of AI and DePIN and surveys the development of the related protocols.
In the AI technology stack, DePIN networks empower AI by supplying computing resources. Heavy GPU demand from large tech companies has created shortages, making it difficult for other developers to obtain enough GPUs for their workloads. Developers are often forced into centralized cloud services, where long-term high-performance hardware contracts tend to be inflexible and inefficient.
DePIN offers a more flexible and cost-effective alternative by incentivizing resource contributions with tokens. In the AI field, DePIN aggregates GPU resources from individual owners and data centers into a unified supply for users who need hardware. These networks give developers customized, on-demand access to compute while creating additional income for GPU owners.
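To make the incentive model concrete, here is a minimal sketch (in Python, with hypothetical names; none of these networks' actual reward formulas are described in this article) of how a network might split a per-epoch token reward among GPU providers in proportion to the compute hours they contributed:

```python
# Hypothetical sketch: splitting an epoch's token reward among GPU providers
# in proportion to contributed GPU-hours. Real DePIN networks (Render, Akash,
# io.net, ...) each use their own reward formulas; this only illustrates the idea.

def allocate_rewards(contributions: dict[str, float], epoch_reward: float) -> dict[str, float]:
    """contributions maps provider address -> GPU-hours served this epoch."""
    total_hours = sum(contributions.values())
    if total_hours == 0:
        return {provider: 0.0 for provider in contributions}
    return {
        provider: epoch_reward * hours / total_hours
        for provider, hours in contributions.items()
    }

if __name__ == "__main__":
    epoch_contributions = {"0xAlice": 120.0, "0xBob": 40.0, "0xCarol": 40.0}
    print(allocate_rewards(epoch_contributions, epoch_reward=1_000.0))
    # {'0xAlice': 600.0, '0xBob': 200.0, '0xCarol': 200.0}
```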
There are various AI DePIN networks in the market, each with its own features. Below, we will explore the characteristics and goals of several major projects.
AI DePIN Network Overview
Render
Render is a pioneer among P2P networks providing GPU computing power; it initially focused on graphics rendering for content creation and later expanded its scope to AI computing tasks.
Akash
Akash is positioned as a "super cloud" platform that supports storage, GPU, and CPU computing, serving as an alternative to traditional cloud services.
io.net
io.net provides access to distributed GPU cloud clusters, specifically designed for AI and ML use cases.
Gensyn
Gensyn provides GPU computing power focused on machine learning and deep learning computations.
Aethir
Aethir specializes in providing enterprise-level GPUs, mainly targeting compute-intensive fields such as AI, ML, and cloud gaming.
Phala Network
Phala Network, as the execution layer of Web3 AI solutions, addresses privacy issues through a Trusted Execution Environment (TEE).
Project Comparison
| | Render | Akash | io.net | Gensyn | Aethir | Phala |
|--------|--------|-------|--------|--------|--------|-------|
| Hardware | GPU & CPU | GPU & CPU | GPU & CPU | GPU | GPU | CPU |
| Business Focus | Graphic Rendering and AI | Cloud Computing, Rendering and AI | AI | AI | AI, Cloud Gaming and Telecommunications | On-chain AI Execution |
| AI Task Type | Inference | Bidirectional | Bidirectional | Training | Training | Execution |
| Work Pricing | Performance-Based | Reverse Auction | Market Pricing | Market Pricing | Bidding System | Equity Calculation |
| Blockchain | Solana | Cosmos | Solana | Gensyn | Arbitrum | Polkadot |
| Data Privacy | Encryption & Hashing | mTLS Authentication | Data Encryption | Secure Mapping | Encryption | TEE |
| Work Fee | 0.5-5% per job | 20% USDC, 4% AKT | 2% USDC, 0.25% reserve | Low Fee | 20% per session | Proportional to staked amount |
| Security | Rendering Proof | Proof of Stake | Proof of Computation | Proof of Stake | Rendering Capability Proof | Inherited from Relay Chain |
| Completion Proof | - | - | Time-Lock Proof | Learning Proof | Rendering Work Proof | TEE Proof |
| Quality Assurance | Dispute | - | - | Verifier and Whistleblower | Checker Node | Remote Attestation |
| GPU Cluster | No | Yes | Yes | Yes | Yes | No |
Why These Features Matter
Availability of clustering and parallel computing
Distributed computing frameworks use GPU clusters to improve training efficiency and scalability, and most projects have now integrated clusters for parallel computing. io.net collaborated with other projects to deploy over 3,800 clusters in the first quarter of 2024. Render does not support clusters, but it works similarly by decomposing a single frame so that multiple nodes process it simultaneously. Phala currently supports only CPUs but allows CPU workers to be clustered.
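As a rough illustration of the decomposition idea (not any project's actual scheduler; all names here are hypothetical), the sketch below splits one large job, such as a frame to render or a dataset to process, into tiles and farms the pieces out to parallel workers standing in for nodes:

```python
# Hypothetical sketch of work decomposition: split one large job into tiles,
# process them on several "nodes" in parallel, then reassemble the result.
# This mirrors the idea of clustering / frame decomposition, not any project's
# actual scheduler.
from concurrent.futures import ProcessPoolExecutor

def render_tile(tile: range) -> list[int]:
    """Stand-in for expensive per-tile work (rendering, feature extraction, ...)."""
    return [pixel * pixel for pixel in tile]  # placeholder computation

def split_into_tiles(width: int, num_nodes: int) -> list[range]:
    step = (width + num_nodes - 1) // num_nodes
    return [range(start, min(start + step, width)) for start in range(0, width, step)]

if __name__ == "__main__":
    tiles = split_into_tiles(width=1_000_000, num_nodes=8)
    with ProcessPoolExecutor(max_workers=8) as pool:  # each worker stands in for a node
        partial_results = list(pool.map(render_tile, tiles))
    frame = [value for part in partial_results for value in part]  # reassemble
    print(len(frame))  # 1000000
```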
Data Privacy
Protecting sensitive datasets is crucial, and most projects use some form of data encryption to safeguard privacy. io.net has partnered with Mind Network to launch fully homomorphic encryption (FHE), which allows encrypted data to be processed without decrypting it. Phala Network introduces a Trusted Execution Environment (TEE), which prevents external processes from accessing or modifying the data.
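To show what "processing encrypted data without decryption" means in principle, here is a toy, secret-key additively homomorphic scheme (one-time masking modulo a large n). It is only a teaching sketch and bears no relation to the actual FHE scheme used by io.net and Mind Network:

```python
# Toy additively homomorphic encryption: E(m) = (m + k) mod N with a fresh random
# mask k per message. A worker can add ciphertexts without learning the plaintexts;
# the data owner removes the combined mask to decrypt. Teaching sketch only,
# NOT the FHE scheme used by io.net / Mind Network.
import secrets

N = 2**64  # public modulus

def encrypt(message: int, mask: int) -> int:
    return (message + mask) % N

def decrypt(ciphertext: int, mask: int) -> int:
    return (ciphertext - mask) % N

# Data owner encrypts two private values with independent masks.
m1, m2 = 1234, 5678
k1, k2 = secrets.randbelow(N), secrets.randbelow(N)
c1, c2 = encrypt(m1, k1), encrypt(m2, k2)

# Untrusted worker adds the ciphertexts without ever seeing m1 or m2.
c_sum = (c1 + c2) % N

# Owner decrypts the sum using the combined mask.
assert decrypt(c_sum, (k1 + k2) % N) == m1 + m2
print("homomorphic sum verified")
```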
Proof of computation completion and quality checks
The projects use different methods to generate proofs of completed work and to check its quality. Gensyn and Aethir both generate proofs that the work was completed and run quality checks on it. io.net's proofs indicate that GPU performance was fully utilized. Render relies on a dispute resolution process. Phala generates TEE attestations to ensure that AI agents performed the required operations.
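A hedged sketch of how a verifier-and-whistleblower style check might work in principle (the actual Gensyn and Aethir protocols are more involved and are not specified here): the worker returns a result plus a hash commitment, and a verifier recomputes a random sample of the work and compares it against the commitment.

```python
# Hypothetical sketch of completion proof + spot-check verification:
# the worker commits to its per-item outputs with a hash, and a verifier
# recomputes a random sample of items and checks them against the commitment.
# Real protocols (Gensyn's verifier/whistleblower, Aethir's checker nodes)
# are far more involved; this only illustrates the idea.
import hashlib
import json
import random

def task(x: int) -> int:
    return x * x + 1  # the computation both parties agree on

def worker_run(inputs: list[int]) -> tuple[list[int], str]:
    outputs = [task(x) for x in inputs]
    commitment = hashlib.sha256(json.dumps(outputs).encode()).hexdigest()
    return outputs, commitment

def verifier_spot_check(inputs, claimed_outputs, commitment, sample_size=5) -> bool:
    # 1) The claimed outputs must match the worker's commitment.
    if hashlib.sha256(json.dumps(claimed_outputs).encode()).hexdigest() != commitment:
        return False
    # 2) Recompute a random sample and compare.
    for i in random.sample(range(len(inputs)), k=min(sample_size, len(inputs))):
        if task(inputs[i]) != claimed_outputs[i]:
            return False  # whistleblow: the worker's stake would be slashed
    return True

inputs = list(range(100))
outputs, commitment = worker_run(inputs)
print(verifier_spot_check(inputs, outputs, commitment))  # True for an honest worker
```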
Hardware Statistics
| | Render | Akash | io.net | Gensyn | Aethir | Phala |
|-------------|--------|-------|--------|--------|--------|-------|
| Number of GPUs | 5,600 | 384 | 38,177 | - | 40,000+ | - |
| Number of CPUs | 114 | 14,672 | 5,433 | - | - | 30,000+ |
| H100/A100 Quantity | - | 157 | 2,330 | - | 2,000+ | - |
| H100 Cost/Hour | - | $1.46 | $1.19 | - | - | - |
| A100 Cost/Hour | - | $1.37 | $1.50 | $0.55 (expected) | $0.33 (expected) | - |
Demand for high-performance GPUs
AI model training requires top-performing GPUs such as Nvidia's A100 and H100. The H100 offers roughly 4x the inference performance of the A100, making it the preferred GPU. Decentralized GPU marketplace providers must offer lower prices than centralized options and meet real market demand. io.net and Aethir have each acquired more than 2,000 H100 and A100 units, making them better suited to large-model computation.
The cost of decentralized GPU services has fallen below that of centralized services. Although the memory available to GPU clusters connected over these networks is limited, they still appeal to users with dynamic workloads or a need for flexibility.
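As a quick worked example using the A100 hourly rates listed in the table above (rates change frequently, so this is illustrative arithmetic only), a 1,000 GPU-hour A100 job would cost roughly:

```python
# Illustrative arithmetic only: cost of a 1,000 GPU-hour A100 job at the
# hourly rates listed in the hardware table above (rates change frequently).
a100_rates = {"Akash": 1.37, "io.net": 1.50, "Gensyn (expected)": 0.55, "Aethir (expected)": 0.33}
gpu_hours = 1_000
for provider, rate in a100_rates.items():
    print(f"{provider}: ${rate * gpu_hours:,.0f}")
# Akash: $1,370
# io.net: $1,500
# Gensyn (expected): $550
# Aethir (expected): $330
```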
Providing consumer-grade GPUs and CPUs
CPUs also play an important role in AI model training, and consumer-grade GPUs can be used to fine-tune pre-trained models or run small-scale training. Projects such as Render, Akash, and io.net also serve this market and have developed it into their own niche.
Conclusion
The AI DePIN field is still relatively new and faces challenges. However, the number of tasks executed on these networks and the amount of hardware they host have grown significantly, highlighting the demand for alternatives to Web2 cloud providers. Going forward, these decentralized GPU networks can play a key role in providing developers with cost-effective computing, contributing meaningfully to the future landscape of AI and computing infrastructure.