On Avalanche and Its Future
Author: CryptoYC Tech
Avalanche Consensus Protocol
The most fundamental task of a chain is to ensure network security, which involves consensus. Therefore, when we understand Avalanche, the first thing we need to introduce is the Avalanche consensus protocol.
A General Consensus Engine
First, it is important to clarify that Avalanche was originally a general consensus protocol. The decentralized platform we are familiar with as "Avalanche" is actually composed of this consensus protocol plus a series of supporting facilities. The Avalanche consensus protocol can be summarized in one sentence: "By repeatedly sampling nodes in the network and collecting their responses to a proposal/transaction, it reaches a final consensus."
So in essence, Avalanche provides a partial order over related transactions, yielding a set of non-conflicting transactions.
Of course, this description is too abstract, so let's illustrate what this consensus looks like with an example. Suppose a room is filled with people trying to reach a consensus on what to have for lunch (for simplicity, let's assume there are only two choices: pizza and barbecue). Generally, some people initially prefer pizza while others prefer barbecue. Our goal is to ultimately decide which one to eat. At this point, everyone randomly asks a portion of the people in the room which lunch option they prefer; if more than half choose pizza, they will also choose pizza, and vice versa for barbecue. As everyone repeats this process, more and more people will have the same preference in each round, and after enough rounds, a final consensus will be reached.
Let's look at this process in a more structured way:
Assume there are n people in the room, and before deciding what to eat, each time they randomly ask k people's preferences. When at least α people respond with the same preference, the inquiry ends, and they move to the next round, continuing until the results are consistent for β consecutive rounds, at which point the final choice of what to eat is determined.
This also explains a principle behind the establishment of this consensus: Random sampling-induced preference changes will lead the network to prefer one choice, which will result in a stronger preference for that choice until it becomes irreversible, allowing nodes to make a decision. We can see that this process is very similar to Bitcoin's probabilistic finality confirmation process, which we will discuss in detail later.
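As a toy illustration (not the real protocol), the round structure above can be sketched in Python. The parameter names n, k, α (alpha), and β (beta) follow the description; everything else, including the specific values and the two lunch options, is made up for the demo:

```python
import random

def lunch_consensus(n=100, k=10, alpha=6, beta=5, seed=0, max_rounds=1000):
    """Toy repeated-sampling consensus between 'pizza' and 'bbq'.

    n: participants; k: sample size per inquiry; alpha: quorum within a
    sample; beta: consecutive consistent rounds needed before deciding.
    """
    rng = random.Random(seed)
    prefs = [rng.choice(["pizza", "bbq"]) for _ in range(n)]
    streak = [0] * n            # consecutive rounds with a consistent result
    decided = [False] * n

    for _ in range(max_rounds):
        if all(decided):
            break
        new_prefs = prefs[:]
        for i in range(n):
            if decided[i]:
                continue
            votes = [prefs[j] for j in rng.sample(range(n), k)]
            for choice in ("pizza", "bbq"):
                if votes.count(choice) >= alpha:
                    if choice == prefs[i]:
                        streak[i] += 1      # same preference: extend the streak
                    else:
                        new_prefs[i] = choice
                        streak[i] = 1       # preference flipped: restart
                    if streak[i] >= beta:
                        decided[i] = True   # beta consecutive rounds: decide
                    break
            else:
                streak[i] = 0               # no alpha-majority this round
        prefs = new_prefs
    return prefs

print(set(lunch_consensus()))  # the room converges on a single option
```

With these parameters the room typically converges within a few dozen rounds; raising β trades latency for stronger confidence in the result.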
For a more visual experience of the consensus, you can visit its demo site:
https://tedyin.com/archive/snow-bft-demo/#/snow
The above is a simple overview of the consensus process. However, to understand how Avalanche has developed to its current state, we need to review its historical development. So next, let's look at the history of Avalanche's consensus development.
Historical Development of the Consensus Protocol
Slush
Initially, the protocol Avalanche proposed was not called Avalanche but Slush, and it adopted the UTXO model. Slush implemented the basic functionality of reaching consensus on "what to eat for lunch" as described above. In simple terms, the process is as follows:
At first, everyone has no preference and does not know whether to eat pizza or barbecue.
Due to certain triggering conditions, such as a node receiving a transaction, they develop a preference and begin to randomly select a small sample of k people to ask about their preferences, similar to the previous process.
However, the difference is that during this process, when a person without a preference is asked by someone with a preference, they will adopt the preference of the inquirer. For example, if I have no idea but someone comes to me and asks, "I'm having pizza today, what do you want to eat?" then I will also choose pizza. If I already have a preference, such as I love barbecue, I will respond with my own preference to the inquirer.
If more than α people respond with the same preference, and my own preference differs from this majority preference, I will also change to the majority preference. Then the inquiry process is repeated until X rounds of inquiries are completed, at which point the final preference is determined. This X is a safeguard to prevent the process from repeating indefinitely.
From this process, we can identify the characteristics of Slush:
- It only records the state of the current round and keeps no historical state, leaving it vulnerable to Byzantine faults.
- The sample size for inquiries is small, and unlike other chains, it does not need to ask all nodes, resulting in efficient probabilistic finality.
- Repeated sampling amplifies random perturbations, which is the key property enabling finality confirmation.
- However, if malicious nodes intentionally change their preferences to differ from the majority preference and disrupt the balance, the overall security of the network will be greatly reduced. Thus, Slush is a non-Byzantine protocol, meaning it cannot tolerate the existence of malicious nodes.
Snowflake
Because Slush's inherent characteristics are insufficient to support secure network consensus, Avalanche launched an upgraded protocol based on it, called Snowflake. The new protocol introduced a counter, letting each node record the credibility of its current preference. After each inquiry round, if a node receives a unified preference response from more than α nodes, its counter for that preference increments by 1; otherwise, it resets to 0. When the counter reaches the threshold β, the node accepts the current preference and will no longer change it. The benefit is that nodes no longer need to wait X rounds to determine their preference, and the impact of interference from malicious nodes is reduced. Thus, Snowflake became a Byzantine fault-tolerant protocol.
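The per-node counter logic can be sketched as follows. This is a minimal illustration of the rules just described; the dictionary layout and function name are invented for the example, and the whitepaper's exact bookkeeping differs slightly:

```python
def snowflake_step(node, sample_votes, alpha, beta):
    """One Snowflake inquiry round for a single node (illustrative sketch).

    node: dict with keys 'pref', 'count', 'accepted'.
    sample_votes: preferences returned by the k sampled peers.
    """
    if node["accepted"]:
        return node
    for choice in set(sample_votes):
        if sample_votes.count(choice) >= alpha:
            if choice == node["pref"]:
                node["count"] += 1          # same preference: counter += 1
            else:
                node["pref"] = choice       # flip to the alpha-majority...
                node["count"] = 1           # ...and restart the counter
            if node["count"] >= beta:
                node["accepted"] = True     # threshold beta reached: final
            return node
    node["count"] = 0                       # no alpha-majority: reset to 0
    return node

node = {"pref": "pizza", "count": 0, "accepted": False}
for _ in range(2):
    snowflake_step(node, ["pizza", "pizza", "pizza", "bbq"], alpha=3, beta=2)
print(node["accepted"])  # True: two consecutive alpha-majorities with beta=2
```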
However, it still had a problem: Snowflake offers strong guarantees with minimal state (a result can be reached with as few inquiries as possible), but that state is short-lived, since the counter resets every time the preference changes. Moreover, it retains only a node's own state, not the historical state of the entire network; it cannot compare the network's state against history, which still poses security issues. To address this, Avalanche further improved the protocol, producing the cornerstone of Avalanche's future: Snowball.
Snowball
The improvement in the Snowball protocol is actually quite simple. Since neither Snowflake nor Slush can retain long-term state, why not let the network preserve multiple states and introduce a new variable to decide which one is correct? Thus, Snowball introduced a confidence counter. Every time an inquiry succeeds (the other party holds the same preference), the confidence counter for that preference increments by 1; when preferences differ, the choice with the higher confidence counter wins. This enhances the reliability of the consensus results and improves the overall security of the network. From this perspective, Snowball already achieves our security goals, but Avalanche did not stop there, leading to the final protocol we see today: Avalanche.
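A sketch of how the confidence counter changes the decision rule relative to Snowflake. The per-choice confidence dictionary `d` and the other field names are assumptions for illustration, not the protocol's actual data structures:

```python
def snowball_step(node, sample_votes, alpha, beta):
    """One Snowball inquiry round (illustrative sketch).

    Unlike Snowflake, the per-choice confidence counters in node['d']
    persist across preference flips; the node always prefers the choice
    whose confidence counter is highest.
    """
    if node["accepted"]:
        return node
    for choice in set(sample_votes):
        if sample_votes.count(choice) >= alpha:
            node["d"][choice] = node["d"].get(choice, 0) + 1  # confidence += 1
            if node["d"][choice] > node["d"].get(node["pref"], 0):
                node["pref"] = choice       # switch to higher-confidence choice
            if choice == node["last"]:
                node["streak"] += 1         # consecutive successes toward beta
            else:
                node["last"] = choice
                node["streak"] = 1
            if node["streak"] >= beta:
                node["accepted"] = True
            return node
    node["streak"] = 0                      # no alpha-majority this round
    return node

node = {"pref": "pizza", "d": {}, "last": None, "streak": 0, "accepted": False}
for _ in range(2):
    snowball_step(node, ["bbq", "bbq", "bbq", "pizza"], alpha=3, beta=2)
print(node["pref"], node["accepted"])  # bbq True
```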
Avalanche
Avalanche introduced a dynamic and append-only DAG (Directed Acyclic Graph) structure on top of Snowball to enhance its efficiency and security. Let's look at the Avalanche DAG structure through a diagram:
From the above diagram, we see that the so-called "dynamic and append-only DAG" means that new nodes (nodes in the DAG, not the nodes we consider participating in consensus) can only be added at the end and cannot be added before existing nodes.
At the same time, it is important to clarify the concepts of ancestors and descendants: any node added later that connects (directly or transitively) back to a given node is that node's descendant, and the given node is its ancestor. In the diagram, b, c, d, and e are all descendants of a; d and e are descendants of c; but e is not a descendant of d.
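Using the relationships stated above (with the specific edges assumed, since the diagram is not reproduced here), the ancestor/descendant check is just reachability along parent edges:

```python
# Edges point from a node to its parents (the nodes it builds on).
# Chosen to match the stated relations: b, c, d, e all descend from a;
# d and e descend from c; e does not descend from d.
parents = {
    "a": [],
    "b": ["a"],
    "c": ["a"],
    "d": ["b", "c"],
    "e": ["c"],
}

def is_descendant(x, ancestor):
    """True if `ancestor` is reachable from x by following parent edges."""
    frontier = list(parents[x])
    while frontier:
        p = frontier.pop()
        if p == ancestor:
            return True
        frontier.extend(parents[p])
    return False

print(is_descendant("d", "a"))  # True, via b (or c)
```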
However, if transactions are used to form a DAG, this structure by itself does not handle "conflicting transactions," which resemble the double-spending problem. Avalanche therefore stipulates that only one transaction from a conflict set can be included in the DAG, and each node can prefer only one transaction from that set. This applies not only to transactions but to all conflicting proposals.
Understanding this concept lets us explore another feature Avalanche added: transitive voting, whereby a vote for a descendant is also a vote for all of its ancestors. This is also a highly efficient way to resolve conflicting transactions (such as double spends).
For example, suppose we are currently running an Avalanche network with the following parameters: the sample size randomly selected each time is k=4, the threshold for a single inquiry is α=3, and the number of consecutive successful inquiries required is β=4.
First, let's look at how this network operates under normal circumstances: if I receive a transaction Y and broadcast it to the selected sample nodes while inquiring about preferences, the result should be that these nodes will have a majority preference, as follows:
We can see that it received three yes votes and one no vote. This means it has received confirmation in this round of inquiry, so the legitimacy of this transaction is updated to true. The DAG updated by that node is as follows:
Here, let's define a few variables: chit, the legitimacy of the transaction in this inquiry; confidence, the credibility of the transaction; and consecutive successes, the number of consecutive rounds in which legitimacy has been confirmed. The chit is a boolean, either true or false. Confidence increments by 1 whenever a round of inquiry confirms the transaction as legitimate, and its ancestors' confidence increments as well, propagating up until an ancestor reaches finality. Consecutive successes increment by 1 for each consecutive round confirmed as legitimate and reset to zero when a round is illegal or inconclusive; the same applies to ancestors.
Now we can return to the example itself. The DAG consists of transactions V, W, X, and Y. Although this round inquires about transaction Y, V, W, and X are its ancestors, so by Avalanche's principle that "voting for descendants also means voting for ancestors," each ancestor increments its own confidence and consecutive-success values, as shown in the diagram. And since the required number of consecutive successful inquiries is β=4, transaction V's consecutive-success count has now reached 4, so its finality is confirmed and it will not participate in the next round of scoring.
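The update rules for chit, confidence, and consecutive successes can be sketched as follows. The linear ancestry V ← W ← X ← Y and the starting counts are assumptions chosen to match the example; the field names are invented for illustration:

```python
def record_round(dag, tx, chit, beta):
    """Apply one inquiry result for `tx`, propagating to its ancestors.

    dag maps a tx name to {'parents', 'chit', 'confidence', 'streak',
    'final'}. A successful round (chit=True) bumps confidence and the
    consecutive-success streak for tx and every non-final ancestor; a
    failed round resets the streaks. Illustrative sketch only.
    """
    dag[tx]["chit"] = chit                  # the chit belongs to the queried tx
    affected, frontier = set(), [tx]
    while frontier:                         # collect tx plus all its ancestors
        t = frontier.pop()
        if t in affected:
            continue
        affected.add(t)
        frontier.extend(dag[t]["parents"])
    for t in affected:
        node = dag[t]
        if node["final"]:
            continue                        # finalized txs no longer score
        if chit:
            node["confidence"] += 1
            node["streak"] += 1
            if node["streak"] >= beta:
                node["final"] = True        # beta consecutive successes
        else:
            node["streak"] = 0              # failed round: reset the streak

# Assumed chain V <- W <- X <- Y with beta = 4; V already has 3 successes.
dag = {
    "V": {"parents": [], "chit": True, "confidence": 3, "streak": 3, "final": False},
    "W": {"parents": ["V"], "chit": True, "confidence": 2, "streak": 2, "final": False},
    "X": {"parents": ["W"], "chit": True, "confidence": 1, "streak": 1, "final": False},
    "Y": {"parents": ["X"], "chit": True, "confidence": 0, "streak": 0, "final": False},
}
record_round(dag, "Y", chit=True, beta=4)
print(dag["V"]["final"])  # True: V reaches 4 consecutive successes
```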
What happens if a conflicting transaction Y' occurs at this point?
We will still follow the previous process, but suppose Y' is rejected by everyone. Then according to the update rules for the parameters we discussed, what will my DAG look like this time? As follows:
The update rules for each value are the same as in the previous round, so I won't elaborate. Note Tx W, however: because its new descendant Y' is a conflicting transaction and has been rejected, W's consecutive successes reset to zero while its confidence decreases by 1.
Thus, we can see that the DAG data structure makes finality confirmation very efficient; and because it does not rely on a single replicated state machine but lets each node maintain its own state machine, transitioning independently and eventually synchronizing, its security is also very high.
At this point, we have analyzed the entire development history and characteristics of the Avalanche protocol. Now we can make a simple summary.
Consensus Summary
- It borrows Bitcoin's mechanism for public-chain duties (using consecutive successes + confidence as the means of finality confirmation while avoiding PoW's inefficiency). At the same time, it separates the consensus layer from the application layer, significantly enhancing performance and scalability.
- Two major innovations: subsampling (consensus voting does not involve all nodes; a random subset is selected each time) and transitive voting (a vote for a descendant is also a vote for its ancestors), allowing high-speed responses and finality confirmation regardless of network scale. This is why the subnets it launches are highly feasible, which we will discuss later.
- There are also drawbacks: it cannot handle off-chain transactions, or can do so only with great difficulty.
Of course, the above describes how they theoretically designed the consensus; ultimately, we need to implement it technically. Avalanche Labs has made some optimizations to the design. Let's take a brief look.
Optimizations in Avalanche's Engineering Implementation
Introducing Vertices
If we strictly follow the design in the white paper, the selected nodes need to vote to confirm each transaction, which will inevitably affect the overall efficiency of the network when a large number of transactions occur. To reduce this situation, Avalanche Labs introduced the concept of "vertices," similar to blocks.
Specifically, when a node receives a new transaction, it does not directly broadcast the transaction to the network for inquiry but instead packages the transaction into a "vertex." Each vertex can contain a large number of transactions, and the nodes selected each round now vote on the vertices, which is considered a vote on all transactions contained within the vertices. Thus, the nodes of the Avalanche DAG are composed of vertices that contain a large number of transactions rather than individual transactions.
At the same time, in the actual inquiry a node does not ask "Do you like this vertex?" but rather "Which vertex do you prefer over this one?" The responses are then the sets of transactions the other nodes consider more legitimate, so the node can update its DAG directly without sorting them, further improving efficiency. Notably, illegal vertices are not deleted but merely marked invalid, unlike in Bitcoin and Ethereum, where bad blocks are discarded.
If a vertex contains a malicious transaction, the entire vertex is rejected, and the legitimate transactions within it are repackaged into the next vertex. Also, since finality confirmation does not occur the moment a single vertex is issued but after a certain delay, a vertex containing an illegal transaction and all of its descendant vertices will be rejected; the legitimate transactions caught as collateral damage are repackaged into the next vertex and voted on again.
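A minimal sketch of this batching-and-repackaging behavior. The function names, vertex fields, and transaction labels are all invented for illustration and do not reflect AvalancheGo's actual types:

```python
def package_vertex(mempool, parent_ids, max_txs=3):
    """Batch pending transactions into one vertex (illustrative sketch)."""
    txs = [mempool.pop(0) for _ in range(min(max_txs, len(mempool)))]
    return {"parents": parent_ids, "txs": txs}

def handle_rejection(vertex, is_legit, mempool):
    """When a vertex is voted down because one tx conflicts, its other
    legitimate txs return to the mempool for the next vertex."""
    survivors = [tx for tx in vertex["txs"] if is_legit(tx)]
    mempool.extend(survivors)
    return survivors

mempool = ["tx1", "tx2", "tx_doublespend", "tx4"]
v = package_vertex(mempool, parent_ids=["v0"])
# Suppose the sampled nodes reject v because of tx_doublespend:
handle_rejection(v, lambda tx: tx != "tx_doublespend", mempool)
print(mempool)  # ['tx4', 'tx1', 'tx2'] -- legitimate txs re-queued
```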
Node Staking Model Matching the Algorithm
We all know that Avalanche also uses a PoS mechanism to allow nodes to participate in security maintenance, but we also see that Avalanche's mechanism involves random sampling inquiries rather than querying all nodes. This raises a question: what differences will nodes with different staking amounts have in this mechanism? Avalanche's answer is simple:
The more a node stakes, the more likely it is to be randomly selected as an inquiry target; and the more often it is selected and the faster it responds, the more rewards it earns as a benign node.
In addition to the fact that both node staking and delegated staking have a lock-up period (1-2 years for nodes, 2 weeks to 1 year for delegation), Avalanche also stipulates that the amount of delegated staking a node accepts cannot exceed a multiple of its own staking amount, and there is also a cap. However, we all know that governance parameters can be adjusted at any time, so we won't specify the exact values.
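Stake-proportional selection can be sketched as weighted sampling without replacement. This is illustrative only; the actual sampler in the Avalanche implementation differs in detail:

```python
import random

def sample_validators(stakes, k, rng=random):
    """Pick k distinct validators with probability proportional to stake
    (illustrative sketch; names and stake values below are made up)."""
    chosen = []
    pool = dict(stakes)
    for _ in range(min(k, len(pool))):
        names = list(pool)
        weights = [pool[n] for n in names]
        pick = rng.choices(names, weights=weights, k=1)[0]
        chosen.append(pick)
        del pool[pick]                      # sample without replacement
    return chosen

stakes = {"A": 2000, "B": 500, "C": 500, "D": 100}
print(sample_validators(stakes, k=2, rng=random.Random(1)))
```

A validator with four times the stake of another is, per draw, four times as likely to be queried, which is what ties reward frequency to stake.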
Overall Architecture
I don't need to say much about the overall architecture; everyone should be quite familiar with it. Its structure mainly consists of the three official subnets launched. The structural diagram has been seen many times, so I'll just paste it here:
Image source: https://docs.avax.network/
Here are a few points that need clarification:
- Snowman is an optimized chain for smart contracts based on the Avalanche consensus; its consensus is still the Avalanche consensus protocol, so it is not separately mentioned.
- We can talk about subnets, also known as dedicated node validation networks:
Avalanche has recently been strongly promoting subnets. The most critical point is about high customizability and security. A node can serve as a node for multiple subnets, freely choosing, while subnets can also set their own thresholds, such as hardware requirements, staking amounts, etc. However, in any case, subnet nodes must also be mainnet nodes. At the same time, Avalanche recommends that third-party subnet gas fees be the same as the official gas fees, either fixed or zero.
Subnets are also an important factor in Avalanche's infinite scalability. Because each node can participate in multiple subnets, as long as the node's own hardware conditions are adequate, the overall network speed will increase with the number of subnets. For example, if a node supports a subnet with a speed of 1000 TPS, supporting three subnets would make the entire network 3000 TPS. Moreover, since the prerequisite for becoming a subnet node is to be a mainnet node, theoretically, the more subnets there are, the more secure Avalanche becomes. The more mainnet nodes there are, the more nodes can directly serve the subnets, reducing the barriers for subnets, creating a mutually reinforcing positive cycle.
Assets between subnets can ideally transfer seamlessly and quickly, but due to the current scarcity of third-party subnets, the specific situation still needs observation. Additionally, how each subnet designs its token economy to incentivize nodes to participate is also a matter that needs research.
Some may ask whether this means subnets share security. I personally think that claim is problematic. Although nodes can be shared, the specific nodes in each subnet and each subnet's rules differ. For example, are malicious nodes slashed, or merely disqualified as validators (the Avalanche mainnet does the latter)? These are open issues. However, this does not mean subnet security is entirely unprotected; a lot of game theory is involved. For instance, if a subnet's rule is that disqualification affects a node's standing in other subnets, then a node must weigh many factors before acting maliciously. Not to mention that becoming a node requires KYC/AML.
Currently, the main traffic is on the C chain, and DeFi and NFTs are inseparable from it. However, looking at the relationships among these three official subnets, it is still necessary to consider the ecosystem. So let's directly look at Avalanche's ecosystem.
Current Ecosystem Status
There isn't much to say about this; various data websites have it. I'll just paste the data from DefiLlama.
Basic Data
As of April 11, Avalanche has 186 projects. According to DefiLlama, Avalanche's current TVL is $14.88 billion (about one-tenth of Ethereum's), down significantly from its peak of $23.88 billion. It is also evident that the most important application in the ecosystem right now is Aave. This is not hard to understand: the most crucial infrastructure on a public chain is DeFi infrastructure such as DEXes, lending, and yield aggregators. The established projects on Ethereum are already quite large and have educated the market effectively, and most established projects are DeFi projects. Since they are already market leaders, why not leverage them? The cost is far lower than building this infrastructure from scratch.
Image source: https://defillama.com/chain/Avalanche
Therefore, not only Aave, but the projects ranking high in terms of volume in the entire Avalanche ecosystem are mostly familiar DeFi projects or forks of established DeFi projects, along with the aforementioned yield products. Of course, many projects in this chart are not only on Avalanche, but even so, if you click on the TVL distribution of each project, the ranking remains the same.
We can see that the top ten trading volumes on Avalanche are basically P2E projects.
What is the Future of Avalanche?
After reviewing the basic data, we can now ponder a question: If Avalanche is just like other public chains without its own ecological characteristics, how can it continue to develop? Especially since the current projects are still using the C chain and not utilizing their unique subnets, can Avalanche leverage its subnet feature to do something?
To address this question, we first need to consider what a new public chain should do to maintain competitiveness. The answer is quite simple:
"Attract investment, draw in traffic, and attract external funds." In blockchain terms, this translates to "attracting funds, increasing TVL, and enhancing the overall liquidity of the ecosystem." The traditional approach is to introduce more DeFi projects, but today's DeFi users are quite savvy, and funds attracted this way tend to be short-term (withdrawn quickly after enjoying initial benefits, leading to rapid declines in TVL, as can be seen from Avalanche's TVL changes).
Moreover, the barriers for newcomers to join DeFi are not low, such as planning LP losses and gains, calculating staking risks and returns, etc. On the other hand, ordinary PFP NFTs require high operational standards from project parties, and even if an excellent PFP project emerges, the liquidity it brings is not comparable to DeFi. Additionally, most PFPs currently do not require the unique subnet functionalities (Ethereum and other chains or Avalanche's own C chain are sufficient to meet PFP requirements).
At this point, we can see that Game-Fi is a very good entry point. Most GAME-FI projects are essentially DeFi, completing LP provision and staking (buying game NFTs, items, etc.) through gamified interactions. This not only provides good liquidity but also offers a reasonable "reason" to lock in funds. Furthermore, the combination of subnets and Game-Fi can create a win-win situation:
For current Game-Fi projects, regardless of their playability, the on-chain interactions are not on the same level as other applications. Especially with the recent emergence of some Game-Fi projects resembling traditional games, the performance requirements for public chains are very high. Avalanche, with its 1-2 second finality confirmation and customizable VMs, is an ideal choice.
The advantages of Avalanche subnets make Game-Fi a relatively low-cost trial-and-error form.
Moreover, there are some dominant Game-Fi projects, such as DeFi Kingdom, whose scale can rival that of a small public chain, as can their interaction demand. How to make better use of that scale is also a concern for them, and this is where Avalanche's subnets come in: anyone can create their own EVM-compatible "blockchain" at low migration cost. Once that happens, the tokens in their Game-Fi become a form of underlying asset (interaction fees can be paid directly in their tokens), giving the token a value-capture ability beyond the game itself. Their Game-Fi can also evolve into an ecosystem/x-verse, especially since assets between subnets can theoretically transfer seamlessly; the interoperability of an x-verse built on subnets far exceeds that of projects on other blockchains, ultimately forming an independent yet interconnected ecosystem and building a competitive moat.
I believe Avalanche is thinking along these lines too. For example, on March 8 it launched Avalanche Multiverse (approximately $290 million worth of AVAX, about 4 million AVAX, to incentivize subnet growth), and the first subnet launched in March was DFK, which received $15 million of that funding. After going live, DFK indeed turned its JEWEL into the subnet's transaction-fee token and issued a new token, CRYSTAL, as the universal in-game asset. Recently there have been reports that Crab is also preparing to move onto Avalanche, so we can wait and see what Avalanche ultimately becomes.
Another important point: even if a subnet does not use AVAX as its token, subnet nodes must also be mainnet nodes, so validators must still purchase and stake AVAX. As the number of subnets grows, so does the number of nodes; coupled with AVAX's mint/burn mechanism, the value capture of the AVAX token will not diminish significantly.
It can be seen that for Avalanche to maintain long-term competitiveness, it must effectively leverage the advantages of its subnets. Whether it will form a unique x-verse chain different from other chains remains to be seen.
Comparison with Polkadot and Cosmos
A brief comparison of the characteristics of the three networks:
I have already discussed the detailed comparison between Cosmos and Polkadot in a recent article, which you can check out in “The Heterogeneous Dual Kings”. The design paradigm of Avalanche is something that aligns more closely with the native characteristics of blockchain. It ensures freedom while emphasizing interoperability, and the relationship between its subnets and mainnet ensures that its scalability and efficiency will not diminish rapidly as the scale increases. Compared to some public chains, its efficiency enhancement solutions are indeed feasible.
Conclusion
Today's summary is relatively simple. Avalanche inherits the mechanism of probabilistic finality confirmation from Bitcoin but adds its own characteristics from the ground up, especially with the introduction of subnets, which adds to its "bottom-up" nature. This quality aligns more with my view of complex systems: we are always in a complex system, and a complex system must be formed from the bottom up.
However, this does not mean that any application is suitable for having its own subnet; rather, Avalanche gives you more choices. If it can leverage the advantages of its subnets, I personally believe Avalanche will develop some very interesting things.