From the history of AI development: understanding io.net under "AI + DePIN"
Author: Biteye Core Contributor Fishery
Editor: Biteye Core Contributor Crush
Community: @BiteyeCN
io.net is a decentralized AI computing power platform built on Solana and developed by IO Research, reaching a $1 billion FDV in its latest funding round.
In March of this year, io.net announced the completion of a $30 million Series A funding round led by Hack VC, with participation from Multicoin Capital, 6th Man Ventures, Solana Ventures, OKX Ventures, Aptos Labs, Delphi Digital, The Sandbox, and Sebastian Borget from The Sandbox.
io.net focuses on aggregating GPU resources for AI and machine learning companies, aiming to provide services at lower costs and faster delivery times. Since its launch in November last year, io.net has grown to over 25,000 GPUs and has processed over 40,000 compute hours for AI and machine learning companies.
The vision of io.net is to build a global decentralized AI computing network, creating an ecosystem between AI and machine learning teams/companies and powerful GPU resources around the world.
In this ecosystem, AI computing resources become commoditized, alleviating scarcity concerns on both the supply and demand sides. In the future, io.net will also provide access to the IO model store and advanced inference capabilities, such as serverless inference, cloud gaming, and pixel streaming services.
01. Business Background
Before introducing the business logic of io.net, we need to understand the decentralized computing power sector from two dimensions: the development history of AI computing, and past cases of decentralized computing.
The Development History of AI Computing
We can outline the trajectory of AI computing through several key time points:
1. Early Machine Learning (1980s - Early 2000s)
During this period, machine learning methods mainly focused on simpler models, such as decision trees and support vector machines (SVM). The computational demands of these models were relatively low, allowing them to run on personal computers or small servers of the time. Datasets were relatively small, and feature engineering and model selection were key tasks.
Time Frame: 1980s to early 2000s
Computational Requirements: Relatively low, personal computers or small servers could meet the demand.
Computational Hardware: CPU dominated computing resources.
2. The Rise of Deep Learning (2006 - Present)
In 2006, the concept of deep learning was reintroduced, marked by the research of Hinton and others. The subsequent successful application of deep neural networks, particularly convolutional neural networks (CNN) and recurrent neural networks (RNN), marked a breakthrough in this field. This stage saw a significant increase in computational resource demands, especially when processing large datasets in image and speech recognition.
Key Events:
ImageNet Competition (2012): AlexNet's victory in this competition was a landmark event in deep learning history, showcasing the immense potential of deep learning in image recognition for the first time.
AlphaGo (2016): Google DeepMind's AlphaGo defeating world Go champion Lee Sedol marked one of the most prominent moments for AI, demonstrating deep learning's application in complex strategy games and proving its capability in solving highly complex problems.
Computational Requirements: Significantly increased, requiring more powerful computational resources to train complex deep neural networks.
Computational Hardware: GPUs began to become the key hardware for deep learning training due to their superior parallel processing capabilities compared to CPUs.
3. The Era of Large Language Models (2018 - Present)
With the emergence of BERT (2018) and GPT technologies (post-2018), large models began to dominate the AI landscape. These models typically have billions to trillions of parameters, pushing the demand for computational resources to unprecedented levels. Training these models requires a large number of GPUs or specialized TPUs, along with significant power and cooling facilities.
Time Frame: 2018 - Present.
Computational Requirements: Extremely high, requiring a large number of GPUs or TPUs to scale, supported by corresponding infrastructure.
Computational Hardware: Beyond general-purpose GPUs, hardware specialized for large machine learning models has emerged, such as Google's TPUs and NVIDIA's A-series and H-series accelerators (e.g., A100, H100).
Looking at the exponential growth in AI's demand for computational power over the past few decades: early machine learning had low demands, the deep learning era raised them sharply, and AI large models pushed them to the extreme. We have witnessed significant improvements in computing hardware in terms of both quantity and performance.
This growth is reflected not only in the expansion of traditional data centers and the performance improvements of hardware like GPUs, but also in the high investment thresholds and substantial return expectations that have intensified competition among internet giants.
Traditional centralized GPU computing centers require expensive initial investments in hardware procurement (like GPUs themselves), data center construction or leasing costs, cooling systems, and maintenance personnel costs.
In contrast, the decentralized computing platform project established by io.net has a clear advantage in terms of setup costs, significantly reducing initial investment and operational costs, making it possible for small teams to build their own AI models.
Decentralized GPU projects utilize existing distributed resources, eliminating the need for centralized investments in hardware and infrastructure. Individuals and enterprises can contribute idle GPU resources to the network, reducing the need for centralized procurement and deployment of high-performance computing resources.
Furthermore, in terms of operational costs, traditional GPU clusters require ongoing maintenance, power, and cooling expenses. Decentralized GPU projects can distribute these costs across various nodes by leveraging distributed resources, thereby reducing the operational burden on any single organization.
According to io.net's documentation, io.net significantly reduces operational costs by aggregating underutilized GPU resources from independent data centers, cryptocurrency miners, and other hardware networks like Filecoin and Render. Coupled with Web3's economic incentive strategies, io.net has a substantial pricing advantage.

Decentralized Computing
Looking back in history, there have indeed been several decentralized computing projects that achieved significant success, attracting a large number of participants and producing important results even without economic incentives. For example:
Folding@home: This is a project initiated by Stanford University aimed at simulating the protein folding process through distributed computing to help scientists understand disease mechanisms, particularly diseases related to improper protein folding such as Alzheimer's and Huntington's disease. During the COVID-19 pandemic, the Folding@home project gathered massive computational resources to assist in researching the coronavirus.
BOINC (Berkeley Open Infrastructure for Network Computing): This is an open-source software platform that supports various types of volunteer and grid computing projects across fields such as astronomy, medicine, and climate science. Users can contribute idle computing resources to participate in various research projects.
These projects not only prove the feasibility of decentralized computing but also showcase its immense growth potential.
By mobilizing contributions of unused computing resources from various sectors of society, computational capabilities can be significantly enhanced. If the innovative Web3 economic model is incorporated, even greater economic efficiencies can be achieved. Web3 experience indicates that a reasonable incentive mechanism is crucial for attracting and retaining user participation.
By introducing an incentive model, a mutually beneficial community environment can be built, further promoting business scale and positively driving technological advancement.
Therefore, io.net can attract a wide range of participants to jointly contribute computing power through the introduction of incentive mechanisms, forming a powerful decentralized computing network.
The Web3 economic model and the potential of decentralized computing provide io.net with strong growth momentum, achieving efficient resource utilization and cost optimization. This not only promotes technological innovation but also provides value to participants, allowing io.net to stand out in the AI field with immense development potential and market space.
02. io.net Technology
Clusters
A GPU cluster connects multiple GPUs over a network into a single collaborative computing system, significantly enhancing the efficiency and capacity for processing complex AI tasks.
Cluster computing not only accelerates the training speed of AI models but also enhances the ability to handle large-scale datasets, making AI applications more flexible and scalable.
In the traditional internet training of AI models, large-scale GPU clusters are required. However, when considering shifting this cluster computing model to a decentralized approach, a series of technical challenges arise.
Compared to traditional internet companies' AI computing clusters, decentralized GPU cluster computing faces more issues: nodes may be spread across different geographical locations, introducing network latency and bandwidth limitations that can slow data synchronization between nodes and thus reduce overall computational efficiency.
Additionally, maintaining data consistency and real-time synchronization across various nodes is crucial for ensuring the accuracy of computational results. Therefore, it requires decentralized computing platforms to develop efficient data management and synchronization mechanisms.
Moreover, managing and scheduling dispersed computing resources to ensure that computational tasks can be effectively completed is also a challenge that decentralized cluster computing needs to address.
io.net has built a decentralized cluster computing platform by integrating Ray and Kubernetes.
Ray, as a distributed computing framework, is directly responsible for executing computational tasks across multiple nodes, optimizing the data processing and training processes of machine learning models, ensuring that tasks run efficiently on each node.
Kubernetes plays a key management role in this process, automating the deployment and management of container applications, ensuring that computing resources are dynamically allocated and adjusted based on demand.
In this system, the combination of Ray and Kubernetes creates a dynamic and elastic computing environment. Ray ensures that computational tasks can be efficiently executed on appropriate nodes, while Kubernetes guarantees the stability and scalability of the entire system, automatically handling the addition or removal of nodes.
This synergy allows io.net to provide coherent and reliable computing services in a decentralized environment, meeting users' diverse needs in both data processing and model training.
Through this approach, io.net not only optimizes resource usage and reduces operational costs but also enhances system flexibility and user control. Users can easily deploy and manage various scales of computational tasks without worrying about the specific configuration and management details of the underlying resources.
This decentralized computing model, leveraging the powerful capabilities of Ray and Kubernetes, ensures the efficiency and reliability of the io.net platform in handling complex and large-scale computational tasks.
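As a rough illustration of this division of labor (the node names and scheduling policy below are invented for this sketch, not io.net's actual implementation): a scheduler places each task on the least-loaded node, mirroring Ray's role of executing tasks on appropriate nodes, while nodes can join or leave the pool at any time, mirroring Kubernetes' elastic handling of node addition and removal.

```python
from dataclasses import dataclass


@dataclass
class Node:
    name: str
    gpus: int
    load: int = 0  # tasks currently assigned to this node


class ElasticCluster:
    """Toy model of the Ray/Kubernetes division of labor:
    task placement (Ray's role) over a node pool that can be
    scaled in or out at any time (Kubernetes' role)."""

    def __init__(self):
        self.nodes = {}

    def add_node(self, node):           # Kubernetes: scale out
        self.nodes[node.name] = node

    def remove_node(self, name):        # Kubernetes: scale in
        self.nodes.pop(name, None)

    def submit(self, task_id):          # Ray: pick the least-loaded node
        node = min(self.nodes.values(), key=lambda n: n.load / n.gpus)
        node.load += 1
        return node.name


cluster = ElasticCluster()
cluster.add_node(Node("dc-gpu-0", gpus=4))   # a data-center node
cluster.add_node(Node("miner-7", gpus=1))    # a single-GPU contributor
placements = [cluster.submit(i) for i in range(5)]
print(placements)
```

Tasks naturally drift toward the node with more spare GPU capacity, and removing a node simply takes it out of consideration for future placements.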
Privacy
Because task scheduling in decentralized clusters is more complex than in traditional data-center clusters, and because transmitting data and computational tasks over the open network increases potential security risks, decentralized clusters must also address security and privacy protection.
io.net enhances network security and privacy by utilizing the decentralized characteristics of mesh private network channels. In such a network, the absence of a central point or gateway significantly reduces the risk of single points of failure; even if some nodes encounter issues, the entire network can continue to operate.
Data is transmitted along multiple paths within the mesh network, making it more difficult to trace the source or destination of the data, thereby enhancing user anonymity.
Furthermore, by employing techniques such as data packet padding and traffic obfuscation, mesh VPN networks can further obscure the patterns of data flow, making it difficult for eavesdroppers to analyze traffic patterns or identify specific users or data streams.
The privacy mechanisms of io.net effectively address privacy issues because they collectively create a complex and dynamic data transmission environment, making it difficult for external observers to capture useful information.
At the same time, the decentralized structure avoids the risk of all data passing through a single point, which not only improves the robustness of the system but also reduces the likelihood of attacks. Additionally, the multi-path transmission of data and traffic obfuscation strategies together provide an extra layer of protection for user data transmission, enhancing the overall privacy of the io.net network.
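Packet padding, one of the techniques mentioned above, can be sketched in a few lines: by padding every packet up to a fixed cell size, an eavesdropper can no longer distinguish messages by their length. The 1,024-byte cell size below is an arbitrary choice for illustration, not a parameter published by io.net.

```python
import math
import os

CELL = 1024  # hypothetical fixed cell size in bytes


def pad(payload: bytes) -> bytes:
    """Pad to the next multiple of CELL with random bytes,
    prefixing the true length so the receiver can strip padding."""
    body = len(payload).to_bytes(4, "big") + payload
    padded_len = math.ceil(len(body) / CELL) * CELL
    return body + os.urandom(padded_len - len(body))


def unpad(cell: bytes) -> bytes:
    n = int.from_bytes(cell[:4], "big")
    return cell[4:4 + n]


short = pad(b"login")        # 5-byte payload
long_ = pad(b"x" * 900)      # 900-byte payload
assert len(short) == len(long_) == 1024  # identical on the wire
assert unpad(short) == b"login"
```

A short login message and a 900-byte data transfer become indistinguishable by size, which is exactly the traffic-analysis resistance the padding is meant to provide.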
03. Economic Model
IO is the native cryptocurrency and protocol token of the io.net network, catering to the needs of two main entities within the ecosystem: AI startups and developers, as well as computing power providers.
For AI startups and developers, IO simplifies payment for cluster deployment; they can also use IOSD Credits, pegged to the US dollar, to pay transaction fees for computational tasks on the network. Each model deployed on io.net requires a small IO payment per inference.
For suppliers, particularly GPU resource providers, IO ensures that their resources receive fair compensation. Whether through direct earnings when GPUs are rented or passive earnings from participating in network model inference during idle times, IO provides rewards for every contribution of GPUs.
In the io.net ecosystem, IO is not only a medium for payment and incentives but also a key governance token. It makes every aspect of model development, training, deployment, and application development more transparent and efficient, ensuring mutual benefits among participants.
Through this mechanism, IO not only incentivizes participation and contributions within the ecosystem but also provides a comprehensive support platform for AI startups and engineers, promoting the development and application of AI technology.
io.net has put significant effort into its incentive model to ensure that the entire ecosystem can operate in a positive cycle. The goal of io.net is to establish a direct hourly rate for each GPU card in the network, expressed in US dollars. This requires providing a clear, fair, and decentralized pricing mechanism for GPU/CPU resources.
As a bilateral market, the core of the incentive model aims to address two major challenges: on one hand, reducing the high costs of renting GPU/CPU computing power, which is crucial for expanding AI and ML computing power demand indicators; on the other hand, addressing the shortage of GPU nodes available for rent among GPU cloud service providers.
Therefore, in terms of design principles, considerations on the demand side include competitor pricing and availability to provide competitive and attractive options in the market, adjusting pricing during peak times and resource shortages.
On the supply side of computing power, io.net focuses on two key markets: gamers and cryptocurrency GPU miners. Gamers possess high-end hardware and fast internet connections but typically own only a single GPU card, while cryptocurrency GPU miners hold large numbers of GPUs but may be constrained by internet connection speed and storage space.
Thus, the pricing model for computing power includes multi-dimensional factors such as hardware performance, internet bandwidth, competitor pricing, supply availability, peak time adjustments, commitment pricing, and location differences. Additionally, it needs to consider the best profits when hardware is used for other proof-of-work cryptocurrency mining.
In the future, io.net will further provide a completely decentralized pricing scheme and create a benchmarking tool similar to speedtest.net for miner hardware, establishing a fully decentralized, fair, and transparent market.
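To make the multi-dimensional pricing concrete, here is a hypothetical hourly-rate function; every weight and multiplier is invented for illustration, since io.net has not published its pricing formula. Note the final `max`: a rational supplier will only rent out a GPU if the rate at least matches what the card could earn mining proof-of-work coins, which is the opportunity cost mentioned above.

```python
def hourly_price_usd(base_rate: float, bandwidth_tier: str,
                     peak: bool = False, commitment_discount: float = 0.0,
                     mining_opportunity_usd: float = 0.0) -> float:
    """Illustrative only: all multipliers below are invented placeholders,
    combining the factors the article lists (bandwidth, peak-time surge,
    commitment pricing, and the PoW-mining opportunity cost)."""
    tier_multiplier = {"low": 0.8, "medium": 1.0, "high": 1.2}[bandwidth_tier]
    price = base_rate * tier_multiplier
    if peak:
        price *= 1.5                      # surge pricing during demand peaks
    price *= (1.0 - commitment_discount)  # discount for long-term commitments
    # the rate must at least match what the GPU could earn mining PoW coins
    return max(price, mining_opportunity_usd)


print(hourly_price_usd(0.50, "high", peak=True))
```

Competitor pricing and location differences would enter through `base_rate` in a real system; they are folded into a single number here to keep the sketch small.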
04. Participation Methods
io.net has launched the Ignition event, the first phase of the io.net community incentive program, aimed at accelerating the growth of the IO network.
The program consists of three independent reward pools.
Worker Rewards (GPU)
Galaxy Task Rewards
Discord Role Rewards (Airdrop Tier Role)
These three reward pools are completely independent, and participants can earn rewards from each of the three pools without needing to associate the same wallet with each reward pool.
GPU Node Rewards
For nodes that have already connected, airdrop points will be calculated based on total time worked from November 4, 2023, to April 25, 2024, when the event ends. At the end of the Ignition event, the airdrop points earned by users will be converted into airdrop rewards.
Airdrop points will consider four aspects:
A. Ratio of Job Hours Done (RJD) - Total hours of jobs completed from November 4, 2023, until the end of the event.
B. Bandwidth (BW) - Nodes will be classified into tiers by bandwidth speed:
Low Speed: download 100 MB/second, upload 75 MB/second.
Medium Speed: download 400 MB/second, upload 300 MB/second.
High Speed: download 800 MB/second.
C. GPU Model (GM) - Points will be determined based on the GPU model, with higher-performing GPUs earning more points.
D. Uptime (UT) - Total successful running time from the connection of the Worker on November 4, 2023, until the end of the event.
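io.net has not published the exact scoring formula, so the sketch below only illustrates how the four factors (RJD, BW, GM, UT) might combine into a single score; the point tables and the weighting are placeholders invented for this example.

```python
# Hypothetical point tables -- not io.net's actual values.
GPU_MODEL_POINTS = {"H100": 10, "A100": 8, "RTX 4090": 5, "RTX 3080": 3}
BANDWIDTH_POINTS = {"low": 1, "medium": 2, "high": 3}


def airdrop_points(job_hours: float, uptime_hours: float,
                   gpu_model: str, bandwidth_tier: str) -> float:
    """Placeholder scoring: job hours (RJD) are scaled by GPU model (GM)
    and bandwidth tier (BW), with uptime (UT) added as a base reward."""
    return (job_hours
            * GPU_MODEL_POINTS[gpu_model]
            * BANDWIDTH_POINTS[bandwidth_tier]
            + uptime_hours)


# An A100 on a high-speed link, 10 job hours and 100 uptime hours:
print(airdrop_points(10, 100, "A100", "high"))
```

The shape of the formula (multiplicative on hardware quality, additive on uptime) is a design guess; the point is only that all four published factors feed into one score.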
It is worth noting that airdrop points are expected to be viewable by users around April 1, 2024.
Galaxy Task Rewards (Galxe)
Galaxy Task connection address: https://galxe.com/io.net/campaign/GCD5ot4oXPAt
Discord Role Rewards
This reward will be overseen by the community management team of io.net, requiring users to submit the correct Solana wallet address in Discord.
Users will receive corresponding Airdrop Tier Role levels based on their contributions, activity, and participation in other activities such as content creation.
05. Conclusion
In summary, io.net and similar decentralized AI computing platforms are opening a new chapter in AI computing. Although they face challenges related to technical implementation complexity, network stability, and data security, io.net has the potential to fundamentally change AI business models. It is believed that as these technologies mature and the computing power community expands, decentralized AI computing may become a key force driving AI innovation and adoption.