Interpreting Vitalik's new article: why are Rollups stuck in a development dilemma while Blob space goes underused?
Written by: Haotian
How should we understand @VitalikButerin's new article reflecting on Ethereum scalability? Some say Vitalik is calling for Blob inscriptions, which is quite far-fetched.
So how do Blob data packets work? Why is Blob space not being efficiently utilized after the Cancun upgrade? Is DAS (Data Availability Sampling) preparing for sharding?
In my view, post-Cancun performance is already adequate; what worries Vitalik is the state of Rollup development itself. Why? Here is my understanding:
1) As explained several times before, a Blob is a temporary data packet that is decoupled from EVM calldata and can be accessed directly by the consensus layer. The direct benefit is that the EVM does not need to read Blob data when executing transactions, which keeps execution-layer computation costs low.
Currently, balancing a series of factors, one Blob is sized at 128 KB; under EIP-4844 each mainnet block targets three Blobs and can carry at most six. Ideally, a mainnet block would eventually carry about 16 MB of data, roughly 128 Blob data packets, which is the full-danksharding goal.
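For concreteness, here is the arithmetic behind those figures, a minimal sketch (the per-block counts are EIP-4844 parameters; the 16 MB figure is the future full-danksharding target, not today's limit):

```python
# Blob capacity arithmetic (EIP-4844 parameters; 16 MB is the future target).
BLOB_SIZE = 128 * 1024                  # bytes per blob
TARGET_BLOBS, MAX_BLOBS = 3, 6          # per block under EIP-4844
FUTURE_BLOCK_DATA = 16 * 1024 * 1024    # ~16 MB ideal future block

print("target data per block:", BLOB_SIZE * TARGET_BLOBS, "bytes")  # 393216
print("max data per block:   ", BLOB_SIZE * MAX_BLOBS, "bytes")     # 786432
print("blobs per ideal block:", FUTURE_BLOCK_DATA // BLOB_SIZE)     # 128
```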
Therefore, Rollup teams need to balance the number of Blobs used, TPS capacity, and the storage costs borne by mainnet nodes, aiming to use Blob space as cost-effectively as possible.
Taking @Optimism as an example: at about 500,000 transactions per day, it submits a batch to the mainnet roughly every 2 minutes, carrying 1 Blob each time. Why only 1? Because its TPS is limited; carrying two would leave each Blob underfilled while needlessly increasing storage costs.
As a Rollup chain's transaction volume grows, say to 50 million transactions a day, what then? 1. Compress each Batch's transactions harder, fitting more of them into the same Blob space; 2. Carry more Blobs per batch; 3. Submit batches more frequently. (The arithmetic sketch below shows how much headroom today's single-Blob batches still have.)
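Plugging the Optimism numbers from above into rough arithmetic shows how much room a single Blob still offers, and what a 100x jump in volume would imply (the 500k/day, 2-minute, 1-Blob figures come from the text; everything else is derived):

```python
# Rough batch arithmetic for the Optimism example above.
BLOB_SIZE = 128 * 1024            # bytes per blob
TX_PER_DAY = 500_000              # figure from the text
BATCH_INTERVAL_S = 120            # one batch every 2 minutes

batches_per_day = 24 * 3600 // BATCH_INTERVAL_S    # 720 batches/day
tx_per_batch = TX_PER_DAY / batches_per_day        # ~694 tx per batch
bytes_per_tx = BLOB_SIZE / tx_per_batch            # ~189 blob bytes per tx
print(batches_per_day, round(tx_per_batch), round(bytes_per_tx))

# At 50M tx/day with nothing else changed, each batch would need ~69,444
# transactions, far more than one blob holds, hence the three levers above.
print(round(tx_per_batch * 50_000_000 / TX_PER_DAY))
```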
2) Because a mainnet block's data capacity is constrained by the gas limit and storage costs, 128 Blobs per block is an ideal end state, and nowhere near that many are needed today. Optimism uses only 1 Blob every 2 minutes, leaving layer 2 teams plenty of room to raise TPS, expand their user base, and build out their ecosystems.
Therefore, for quite some time after the Cancun upgrade, Rollups will not be "competing" over the number or frequency of Blobs used, nor bidding against one another for Blob space.
Vitalik mentioned Blobscriptions because this type of inscription can temporarily spike transaction volume and hence demand for Blob usage, pushing utilization up. Inscriptions make a handy example for explaining how Blobs work, but what Vitalik actually wants to convey has little to do with them.
In theory, if a layer 2 project batched transactions to the mainnet frequently and at high volume, filling every Blob, it could interfere with other layer 2s' normal use of Blobs, provided it were willing to bear the high cost of those junk batches. In practice, though, this is like buying hash power to mount a 51% attack on BTC: theoretically feasible, but lacking any real incentive. (The fee-rule sketch below shows why the cost escalates so quickly.)
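Why does such spam lack incentive? EIP-4844's fee market makes sustained crowding exponentially expensive: whenever blocks carry more than the 3-Blob target, excess blob gas accumulates and the blob base fee compounds by roughly 12.5% per full block. A minimal sketch of that update rule (the constants and fake_exponential come from EIP-4844; the 60-block spam loop is illustrative):

```python
# Sketch of the EIP-4844 blob base fee rule under sustained full blocks.
GAS_PER_BLOB = 2**17                      # 131072 blob gas per blob
TARGET_BLOB_GAS = 3 * GAS_PER_BLOB        # per-block target
MAX_BLOB_GAS = 6 * GAS_PER_BLOB           # per-block cap
MIN_BLOB_BASE_FEE = 1                     # wei
UPDATE_FRACTION = 3338477                 # BLOB_BASE_FEE_UPDATE_FRACTION

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    # Integer approximation of factor * e^(numerator / denominator), per EIP-4844.
    i, output, accum = 1, 0, factor * denominator
    while accum > 0:
        output += accum
        accum = accum * numerator // (denominator * i)
        i += 1
    return output // denominator

excess = 0
for block in range(1, 61):                # ~12 minutes of max-full (6-blob) blocks
    excess += MAX_BLOB_GAS - TARGET_BLOB_GAS
    fee = fake_exponential(MIN_BLOB_BASE_FEE, excess, UPDATE_FRACTION)
    if block % 10 == 0:
        print(f"block {block}: blob base fee ~{fee} wei per blob gas")
# The fee climbs from ~3x at block 10 to ~1000x at block 60: spam gets
# priced out long before it can crowd out other rollups for long.
```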
Thus, layer 2 gas fees will stay in a "low" range for a long time, giving the layer 2 market a long "golden development window" in which to muster troops and stockpile supplies.
3) So what happens if, one day, the layer 2 market thrives to the point where the batches submitted to the mainnet become massive and the current Blob capacity no longer suffices? Ethereum has already laid out the answer: Data Availability Sampling (DAS).
Simply put, data that a single node previously had to store in full can be spread across many nodes at once: for example, each node stores 1/8 of all Blob data, and 8 nodes together form a group providing full DA capacity, effectively expanding current Blob storage capacity by 8 times. Nodes then verify that the data really is available by randomly sampling small chunks rather than downloading everything. This is also groundwork for the future sharding phase. (A toy sampling sketch follows.)
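A toy illustration of the sampling idea (not the real protocol, which relies on erasure coding and KZG commitments): by requesting just a handful of random chunks, a node that stores almost nothing can still detect, with overwhelming probability, a publisher who withholds a large share of the data:

```python
# Toy data-availability sampling: detect withheld chunks via random queries.
import random

CHUNKS = 256                                  # blob data split into chunks
full = set(range(CHUNKS))                     # honest publisher serves all
half_served = full - set(random.sample(range(CHUNKS), CHUNKS // 2))  # half withheld

def looks_available(served: set, samples: int = 20) -> bool:
    # Pass only if every randomly requested chunk is actually served.
    return all(random.randrange(CHUNKS) in served for _ in range(samples))

print(looks_available(full))          # True: data is fully available
print(looks_available(half_served))   # almost surely False
# With half the chunks missing, all 20 samples succeed with probability
# 2**-20, so withholding is caught while each node stores only a fraction.
```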
However, Vitalik's repeated emphasis here reads like a reminder to many layer 2 teams: stop complaining that Ethereum's DA is expensive; at your current TPS you haven't even filled the Blob capacity you already have. Hurry up and ramp up ecosystem building, users, and transaction volume, instead of constantly scheming to escape DA for one-click chain issuance.
Vitalik later added that among the major rollups, he believes only Arbitrum has reached Stage 1. Although @DeGateDex, Fuel, and a few others have reached Stage 2, they are not yet widely known. Stage 2 is the ultimate goal for Rollup security, yet very few Rollups have even reached Stage 1 and most remain at Stage 0, which shows that the state of the Rollup industry genuinely worries Vitalik.
4) In fact, purely in terms of the scaling bottleneck, Rollup layer 2 performance still has plenty of room for improvement:
Use data compression to exploit Blob space more efficiently. OP Rollups already have a dedicated compressor component for this work, and the succinct SNARK/STARK validity proofs that ZK Rollups generate off-chain and submit to the mainnet are themselves a form of "compression". (A minimal compression sketch follows.)
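A minimal sketch of the compression idea, assuming nothing about any particular batcher's actual encoding: concatenating transactions with repetitive fields and running a general-purpose compressor over them shows why compression stretches each 128 KB Blob:

```python
# Why batch compression stretches blob space: repetitive tx fields shrink well.
import os
import zlib

BLOB_SIZE = 128 * 1024
# Mock "transactions": a repetitive template plus 8 incompressible bytes each.
txs = [b"transfer:0xAAAA...BBBB:amount=100;" + os.urandom(8) for _ in range(4000)]
raw = b"".join(txs)
packed = zlib.compress(raw, 9)

blobs = lambda n: -(-n // BLOB_SIZE)          # ceiling division
print(len(raw), "->", len(packed), "bytes")
print(blobs(len(raw)), "blob(s) raw vs", blobs(len(packed)), "blob(s) compressed")
```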
Minimize layer 2's reliance on the mainnet, using fraud-proof-style techniques so that the mainnet is invoked to guarantee security only in special cases. Plasma, for example, keeps most data off-chain, touching the mainnet only for deposits and withdrawals, which lets the mainnet vouch for the security of exactly those operations.
This means layer 2 only needs to bind truly critical operations such as deposits and withdrawals tightly to the mainnet, easing the mainnet's burden while boosting L2's own performance. The Sequencer parallelization mentioned earlier, the off-chain filtering and categorization of high transaction volumes, and the hybrid Rollup promoted by @MetisL2, where ordinary transactions go through the OP Rollup path and special withdrawal requests go through a ZK route, all reflect similar thinking. (A schematic routing sketch follows.)
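Finally, a schematic of that hybrid-routing idea (names and structure are illustrative, not Metis's actual implementation): ordinary transactions take the cheap optimistic path, while withdrawals, which benefit from fast finality, take a validity-proof path:

```python
# Illustrative hybrid-rollup router: OP path by default, ZK path for withdrawals.
from dataclasses import dataclass

@dataclass
class Tx:
    kind: str          # "transfer", "swap", "withdraw", ...
    payload: bytes

def route(tx: Tx) -> str:
    if tx.kind == "withdraw":
        return "zk-route"      # validity proof: no 7-day challenge window
    return "op-rollup"         # optimistic batch, settled via fraud proofs

for tx in (Tx("transfer", b""), Tx("withdraw", b""), Tx("swap", b"")):
    print(tx.kind, "->", route(tx))
```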
All told, Vitalik's article on Ethereum's future scaling path is genuinely enlightening: his dissatisfaction with the current state of layer 2 development, his optimistic affirmation of Blobs' performance headroom, his outlook on future sharding technology, and even the concrete optimization directions he points out for layer 2 to consider.
In fact, the only uncertainty left for layer 2 is: how fast can it accelerate its own development?