Dialogue with Taiko co-founder Daniel: How will the zk L2 scaling competition evolve in the end?
Author: 0xRJ.eth
Full text sourced from Wang Dong's recent interview: link
Summary:
- Founder's Journey
- Reasons Behind Taiko's Decentralization Choices (and the balance found between security, efficiency, and decentralization)
- The Endgame of Ethereum Scaling
- Development Directions and "Classification Standards" of the zk L2 Track
- Impact of the Cancun Upgrade on L2 Competitive Landscape: Pros and Cons
- Opportunities for Taiko to Overtake
- Best Decision and Biggest Regret
Founder's Journey
0xRJ.eth: Hello everyone, I am RJ. Today, I am very pleased to invite Daniel, the founder and CEO of Taiko, an Ethereum Layer 2 scaling network based on zero-knowledge proofs, to have a conversation about the endgame of Ethereum scaling, the competitive landscape of Layer 2, and its development directions.
Daniel is a former senior engineer at Google and a former senior R&D director at JD.com, and he is also the founder of Loopring, one of the earliest trading matching protocols based on ZK Rollup on Ethereum. I personally feel that it is very important to understand the views and thoughts of an OG in the Rollup track, especially in 2023, when the L2 track is receiving much attention and on the eve of Ethereum's significant upgrade, the Cancun upgrade.
Shall we start with your journey? I believe you are quite familiar to most veterans in the crypto space. Everyone knows you are the founder of Loopring. However, I think many may not understand how you transitioned from traditional tech giants like Google and JD.com to the crypto space, and what prompted you to start Loopring, and now to build a Layer 2 network based on ZK (zero-knowledge) proofs for Ethereum. What is your vision for these projects and public chains?
Daniel: Initially, my idea was to accumulate enough experience in a traditional Web2 company like Google and then start my own relatively large Web2 company. Many ambitious young people at that time thought this way. The prevailing mindset was "copy to China," meaning whatever was available abroad would be copied to China. Now, it's different; many Americans are copying things from China.
At first, I didn't have much idealism. But later, I chose blockchain partly because working in large traditional companies became a bit boring, especially in Chinese internet companies, which are not entirely technology-driven. As a tech person, I felt uncomfortable. There were various personnel and management issues, and blockchain provided a refreshing feeling. You suddenly realize that engineers can change the world. Of course, there is still a long way to go to change the world, but it ignited a small hope in me. So after spending some time at JD.com, I felt that the Web2 story didn't quite suit me, and I wanted to use technology to change the world.
At that time, the concept of blockchain wasn't widely discussed; Bitcoin was the main topic, discussing how Bitcoin could change traditional finance and whether it could allow people in third-world countries to have their own wallets. Before starting my first blockchain company, I visited Kenya and Nigeria in Africa, where people used their phones as their wallets. Of course, the money in their wallets was very little, and the transfer fees were extremely high. But they could make payments via SMS, so Africa had evolved a micro-payment solution, which, although it sounds rudimentary, had a huge demand and a very large market.
Around 2014, after my trip to Africa, I was inspired to try to work in blockchain, so my first blockchain company was a Bitcoin exchange. At that time, I hadn't really encountered the underlying blockchain technology; what I was doing was quite similar to what Coinbase and Binance do now, bringing Bitcoin to the masses.
It was only later, when I was working on Loopring, that I thought about whether to use blockchain technology to do things in the traditional financial sector that could be done, using blockchain to change the role of intermediaries in finance. Why do financial professionals earn so much? It's because the intermediary profits are enormous, and the fees are very high. You can see that large banks are very profitable, and financial companies and wealth management companies are also very profitable. So our entry point was to use blockchain to eliminate that middle layer, allowing end-users to transact directly with each other through a trustless mechanism.
The goal of Taiko is actually a bit more ambitious; I wonder if we can create a general platform for others to build applications on. This path has been a step-by-step exploration and accumulation before I dared to think this way. Although I have an engineering background, it would be unimaginable for me to create a (general) Layer 2 without experience, so the journey over these years has been very necessary.
Reasons Behind Taiko's Decentralization Choices (and the balance found between security, efficiency, and decentralization)
0xRJ.eth: Taiko's CTO and co-founder Brecht mentioned that the current scaling solution or choice you are making with Taiko actually evolved from a problem you faced during your time at Loopring.
Daniel: When I was at Loopring, we were actually thinking about creating an app. We were doing decentralized matching, and no matter how we changed our thinking, the fees were extremely high, sometimes reaching hundreds of dollars for a single transaction.
Then we asked ourselves, can this really change the world? Scaling was actually an idea that emerged from our initial entrepreneurial thoughts. When we were working on Loopring, we aimed to create dApps and decentralized matching. After some time, we found that the fees were very high, and we asked ourselves whether this solution could truly replace traditional exchanges. Traditional exchanges, of course, have many trust and security issues, but they are still very cheap.
So cost is a very important factor for users. We found that many of the better projects at the time, like 0x, weren't very popular among users, because people felt they were good demos or prototypes but couldn't really replace existing Web2 products. So we decided to try a different path, with the core idea being that we couldn't please everyone; we had to dare to do things that others hadn't thought of. At that time our funding was limited, and we had very few people, maybe fewer than 20.
So we decided to use relatively little funding to try using zero-knowledge proofs for scaling, but we didn't know to what extent we could achieve that. At that time, we didn't even use the term ZK Rollup; even the term Rollup was extremely niche.
As we were nearing launch, I attended a ConsenSys event in Brooklyn, New York, where Vitalik was also present. During my keynote, I said that Loopring's performance was excellent, but many were skeptical, questioning whether a Chinese team could deliver. I presented many figures, but at that time many Chinese teams were exaggerating their capabilities, claiming their new chains could handle millions of transactions per second.
I felt our reputation wasn't particularly good. The overall Chinese developer community was too exaggerated and too capital-driven. So at that conference, I stated that our performance was around a few hundred transactions per second, which was already a significant improvement over Ethereum's scaling. I mentioned that our costs were around 2 cents or 1 cent per transaction. Many people just listened and thought I was trying to deceive them. After my keynote, I met Vitalik, who was also interested in scaling.
At that moment, we realized that what we were doing was what he referred to as ZK Rollup, but ZK Rollup wasn't an idea that Vitalik came up with; it was an idea from Barry WhiteHat. However, no one was doing it at that time, and we were actually the first to create a relatively orthodox Rollup—honestly placing data on the Layer 1 network.
So this was born out of necessity, and inadvertently, we became the first ZK Rollup. However, at that time, we didn't think boldly enough; perhaps the technology wasn't mature enough, and the available code libraries were limited. The Rollup we created was relatively simple, and we made many trade-offs, including a complete lack of programmability.
The challenges were enormous; it wasn't that we didn't try; we simply didn't dare to think about it. There was an evolutionary process leading up to Taiko. Initially, I wanted to create a social network to get more people using blockchain and, to some extent, change people's lives.
But I asked myself, if we create this, where will it be placed? Currently, the best dApp platform is Ethereum, whether it's Ethereum Layer 2 networks or sidechains (though sidechains are no longer a common term). About a year and a half to two years ago, the only option was to use Ethereum along with familiar solutions like Optimistic Rollup.
These Optimistic Rollup chains are also relatively centralized, making them unsuitable for decentralized networks and social networks due to the need for censorship resistance. It wasn't possible to achieve that on those sidechains or L2s, so we thought we could either continue developing such an application and wait for a few years until L2s became more decentralized and secure, or we could leverage our experience from Loopring and try to create a general L2. This is why we decided to pursue a general L2; doing Rollup was actually a byproduct or a last resort that led us to this point.
Creating an app is very interesting; once you have an app, your community becomes active, with sharing, images, and videos. As a user, it's very comfortable to use, but building infrastructure is actually quite boring. You can tell stories about how many users it will support in the future, but in daily use, you won't be using that chain for transactions every day.
So my greatest interest is actually in creating social applications, but now I feel that since we've taken the path of general L2, it's also quite good, as it allows others to build various applications on top of it.
0xRJ.eth: I find this very interesting. It seems that because you initially aimed to create a protocol/dApp, the bottlenecks, difficulties, and needs you encountered from a developer's perspective actually represent the real demands of the market. This also explains why Taiko chose to be completely decentralized from the beginning, with a fully decentralized proposer mechanism. I think discussing your journey today has helped me better understand why Taiko has such a commitment.
Daniel: Yes, the degree of decentralization required for an app varies based on its characteristics and positioning. If it's just an app without the need for censorship resistance, there wouldn't be such a demand. Social networks have this need, which pushes us to do so. But there's another reason: I don't believe a Layer 2 can perform well in a centralized manner and then immediately become decentralized.
There are two paths to take: the first is to start decentralized and gradually improve performance. The second is to start centralized, perform well, and then gradually decentralize. I think the first path is relatively easier to take.
I wouldn't say it's very easy, but it's relatively easier. The second path is akin to a powerful cloud service like Alibaba Cloud trying to decentralize immediately, which is extremely difficult. It involves more changes to the underlying structure. So we said that even if our performance isn't as good as others, we must achieve true decentralization. At least from the perspective of protocol design, it should allow for decentralization.
However, whether it will be fully decentralized from day one after going live is also a question mark. After all, user asset security takes precedence over the degree of decentralization. So we still need to ensure that we decentralize cautiously, especially when the codebase is large, because decentralization means relinquishing control and responsibility. If a hacking incident occurs, saying "I couldn't do anything" isn't a good explanation that the community would accept. As a user, I wouldn't accept that.
I would prefer that you maintain control initially, so that if a hacking incident occurs, you can recover my assets, and then gradually relinquish your rights. But during this relinquishing process, the network's infrastructure shouldn't require major changes because it already supports decentralization. It just needs to be configured to gradually abandon centralized control. So I believe that Taiko won't be fully decentralized at the start because, for users, asset security is too risky.
0xRJ.eth: So it means finding a balance among security, efficiency, and decentralization, and maximizing the degree of decentralization along different dimensions. For example, the overall mechanism is designed to support full decentralization from the start, while the pace of relinquishing control is more of a choice. After balancing these three aspects, Taiko can achieve the maximum degree of decentralization in the proposer mechanism. As far as I know, Taiko should be the first, and currently the only, Layer 2 whose proposer role has been fully decentralized from Day 1 until now, essentially because it was designed for complete decentralization from the start.
Daniel: We are also exploring this; it's actually difficult to quantify how decentralized a network is. Ethereum's validators are also quite centralized; look at Lido's staking pool, which controls or influences a significant share of validator resources. So it's hard to say who is more decentralized than whom. But I believe we allow for a greater degree of decentralization. Suppose more people are willing to participate in producing or proving Taiko's blocks; they are permissionless, meaning they can join without our permission. Whether they compete or cooperate, however, is difficult to control at the protocol layer. But if 100 provers are competing rather than cooperating, the protocol allows for that competition, which can drive fees down. Of course, there may be better supplementary designs in the protocol that can incentivize different provers to compete rather than cooperate.
Currently, we haven't found such a solution, but generally speaking, as long as you are permissionless, you allow for the possibility of decentralization. The second step is to strengthen that decentralization through incentives, ideally where neither party cooperates and both earn more. Whether this can become a reality is still uncertain; we don't know yet.
In terms of proposers, we might currently be the only based rollup. In fact, we are unlikely to have specialized Taiko proposers; instead, L1 validators will also serve as L2 proposers. This way they can earn more, and the entire solution relies heavily on Ethereum, so we don't have to reinvent the wheel. Taiko doesn't need its own validator pool, and we don't need to create MEV solutions. L1 validators are already discussing how to capture MEV from L2s, and we are telling them that the MEV opportunities in our L2 are theirs. As long as they do the job well and produce blocks in a timely manner, L2 users will benefit.
So that's the general idea. There won't be a comprehensive solution; instead, it will be small and beautiful. For example, we will try to reuse L1's infrastructure, which will allow us to move more nimbly. Otherwise, the team would need many people, and overly complex solutions increase the probability of errors.
The Endgame of Ethereum Scaling
0xRJ.eth: The design of this Type1 ZK-EVM is particularly clever for the team. It's very friendly for developers and, in terms of the long-term development of the entire project, I think the potential "benefits" are quite significant.
Daniel: We definitely believe that. What the final outcome will be, we will wait and see. Because from a longer-term perspective, whether it's Type1 or not isn't that important. In the future, all L2 standards for app developers should gradually converge, just like the current L3, L4, etc., which will gradually approach one or two standards.
For example, ZK-EVM may gradually develop a standard regarding which opcodes to support and which not to, and what its state tree looks like, and ZK-VM may use a specific language, such as StarkWare's language or another language that becomes very popular. The underlying VM (virtual machine) will become a runtime environment that competes with or complements Ethereum's EVM. So there may be one or two standards, and all L2s will align with them.
In the future, the programming experience on Ethereum itself will become less important, because the applications remaining on Ethereum may be only Rollups; other applications may not be willing to pay the high transaction fees. So Ethereum's own programming experience will gradually cease to represent the true programming experience for apps. I believe this will become an increasingly clear change. However, Ethereum cannot currently abandon its role as an app platform, because L2s haven't fully emerged yet. Once they do, Ethereum may make certain opcodes particularly expensive, making them unusable for ordinary users and practical only for L2s.
At that point, Ethereum may become more mature, and as L1, its programming experience should also be simplified, not everything needs to be done.
0xRJ.eth: You just mentioned "when all L2 Rollups emerge." When do you think that will be?
Daniel: My judgment is that it may take 5 to 10 years. Five years is quite fast; it may take ten years to replace Ethereum or all other L1s and become the default app development platform. Because zero-knowledge proofs are still evolving, and there aren't particularly satisfactory metrics or solutions in various aspects.
Recently, new ideas have emerged, so it needs to keep evolving. The codebase is also quite large, so this solution must be very mature, and the code must withstand the test of time while continuously optimizing; it's a dynamic process. I don't think in three or four years, any L2 will be able to say, "I can prove my security is exactly the same as Ethereum's."
0xRJ.eth: When you mentioned "replace," you don't mean replacing the Ethereum chain, right? It's more about replacing the functionality of "hosting dApps." However, aspects like DA (data availability) still rely on Ethereum, and security still depends on Ethereum.
Daniel: That's right. It's about transferring Ethereum's responsibility as an app platform to L2, making Ethereum a platform for L2 or a multi-chain secure aggregation platform. I still don't know how to describe that. But in general, ordinary users shouldn't always have to transfer through the central bank; they can transfer through local banks or commercial banks. Currently, the fees for transferring using (central bank) Ethereum are becoming increasingly high.
Development Directions and "Classification Standards" of the zk L2 Track
0xRJ.eth: If we focus more on all ZK L2 scaling solutions, do you think that in your imagination, this track will eventually align with what Vitalik envisioned in his article, where all types of ZK-EVM will gradually lean towards Ethereum, or will it be evenly distributed across Type1 to Type4, or will it present two extremes, Type1 and Type4?
Daniel: I think the so-called Type1 and Type4 actually have a reference benchmark, which is Ethereum's current programming experience. But I think we need to ask whether there will really only be this one reference benchmark in the future or multiple ones. For example, in traditional non-blockchain programming, you have the Java virtual machine and native code, which are at least two completely different programming solutions. In Ethereum, we can say that some L2s may gradually align with the EVM. However, the EVM on L2 may differ from the EVM on L1. Gradually, the L2 EVM will converge on this standard, so the so-called Type1 may differ from Ethereum's own programming.
There may also be another VM, or multiple such VMs, like StarkWare's, and everyone may lean towards that standard, resulting in multiple different standards that could be quite different from each other.
I believe that in the future, from the perspective of application layer platforms, all L2s may have different standards. One standard is the ZK-EVM that evolves from EVM, which will gradually converge to a standard between Type1 and Type2, but it will still have some differences compared to Ethereum's EVM.
For example, blob transactions will exist on Ethereum after the hard fork, with a corresponding opcode available for use. However, you won't be able to use it on L2, which breaks the definition of Type1. Other engineers and teams are also researching types of ZK-VM, including StarkWare's, which may gradually converge to one or two different standards.
As more developers join, there may be more standards because different people have different preferences for programming languages and models. Currently, everyone is interested in Solidity on Ethereum because it's simple and easy to use.
But in the future, I think there may not just be one or two standards; there could be more. Some may be more popular, while others may not be as popular, leading to very different programming experiences for apps. For example, some apps in the financial sector may use a more secure language to write financial apps.
Others may need higher performance, especially to optimize their computational load. So they might use languages that are more like C++ or Rust, which offer better performance but are more complex to program. These standards may gradually be established in the future, but currently, when we discuss these standards, the only standard we have is the one Vitalik mentioned, from Type1 to Type4. However, this standard still has limitations because its reference benchmark is Ethereum.
0xRJ.eth: Vitalik's premise for the standards assumes that Ethereum remains unchanged. Ethereum itself may undergo further adjustments in its programming aspects or its positioning.
Daniel: Ethereum has many upgrade and transformation plans for the future, such as Verkle Tree, Purge, and Splurge (different phases of Ethereum's roadmap), which will have some impact. Additionally, we need to consider that Ethereum now has a so-called consensus layer and execution layer. Its standards are actually a combination of these two different layers.
However, L2 does not have these two layers; it only has Ethereum's execution layer as our base layer and its own execution layer. This results in a slightly different network structure. It's hard to say that the comprehensive programming experience of L2 can become completely identical to Ethereum's in the future; this is almost impossible.
Impact of the Cancun Upgrade on L2 Competitive Landscape: Pros and Cons
0xRJ.eth: Speaking of the Blob—this is a very core underlying change in the upcoming Cancun upgrade. We can specifically discuss this because I think the Cancun upgrade can be said to be the most significant and noteworthy upgrade for Ethereum right now, and this upgrade undoubtedly benefits all Layer 2s and the entire L2 track. What impact do you think the Cancun upgrade will have on the current landscape of Layer 2, and more specifically, what impact will the Blob design you mentioned earlier have on the entire L2?
Daniel: The core concept of Rollup is that you need to throw the data of your L2 transactions onto L1, which is known as Data Availability (DA). This means that through this data, you can reconstruct the world state of L2 at any given point in time. Therefore, DA is extremely important for Rollup. However, currently, Ethereum's data is quite expensive, which means that although Rollup computations are not on Ethereum, the high cost of data makes it difficult to reduce L2 costs.
Since Ethereum aims to use Rollups as the focal point for scaling, it needs to solve the problem of expensive data. The Blob essentially turns data into a separate data market, where its pricing and the pricing of Ethereum's opcodes become two independent parts. If enough participants have hard drive space and are willing to store data, Ethereum's data will gradually become cheaper. This is the core concept. It is therefore beneficial for the entire Rollup ecosystem: any Rollup that can be adapted so that L1 does not care about the specific content of the data can become cheaper.
However, some Rollups cannot do this. For example, Loopring's Rollup was designed to check the content of the data when it goes on-chain at L1. So Rollups like this cannot be adapted to use Blob data. In designing Taiko, I assumed that the data would not be checked for content; I only need to know its hash or commitment.
This raises a question for other L2 chains that are already live on mainnet or testnet. When they were designed, did they assume that L1 contracts would check the content of the data? If they have this assumption, then (after the Blob is deployed) they will face challenges and need to redesign their protocols and rewrite code.
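The design difference Daniel draws, between an L1 contract that must parse the rollup's raw data and one that records only a commitment to it, can be illustrated with a minimal Python sketch. Both function names and shapes are hypothetical, not Loopring's or Taiko's actual contract code:

```python
import hashlib

def propose_block_content_checked(tx_list: bytes) -> bytes:
    """Pre-blob style: the L1 contract inspects the raw bytes itself, so it
    must receive them as calldata -- it cannot work with blob data it never sees."""
    if len(tx_list) % 32 != 0:
        raise ValueError("malformed transaction list")
    # ...per-transaction validation against the raw bytes would go here...
    return hashlib.sha256(tx_list).digest()

def propose_block_commitment_only(data_commitment: bytes) -> bytes:
    """Blob-friendly style: the contract stores only a 32-byte commitment;
    the ZK validity proof later shows the state transition matches it."""
    if len(data_commitment) != 32:
        raise ValueError("expected a 32-byte commitment")
    return data_commitment
```

The first shape is locked to calldata, which is why a protocol built around it would need a redesign after the Blob ships; the second only ever handles a fixed-size commitment, so where the underlying bytes live is irrelevant to the contract.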
We included this assumption in our design, so for us, the Blob is very important and friendly to the protocol. I believe that general L2s in the future will need to be compatible with Blob data; otherwise, they won't be able to be widely adopted. Currently, since the Blob EIP-4844 hasn't been launched and hasn't gone through a period of operation, we don't know how cheap Ethereum L1 data can become or whether this design will allow it to compete with other chains dedicated to data availability and even surpass them.
Currently, there are many chains that say, "I won't do anything; I'll just focus on data availability." From the perspective of a hardware provider, if I have a lot of hard drives, I would be willing to use them for data availability. So, which should be cheaper: using Ethereum L1 or using other chains? If Ethereum can offer a cheaper solution, then there's no need to use other (modular) chains for data availability. But currently, this is a question mark; we don't know. We also don't know how much cheaper Ethereum's data will be compared to its current costs, even without comparing it to other chains, as it's a separate market.
There are supply and demand relationships involved; the more users there are, the more expensive it becomes, and the fewer users there are, the cheaper it gets. So, while everyone is optimistic, they are also cautious about whether it can be reduced by 10 to 50 times. However, to say it can be reduced by 1,000 times or that the data cost can be negligible seems overly optimistic.
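The supply-and-demand behavior described above is concrete in EIP-4844's fee rule: blob gas has its own base fee that rises exponentially while usage stays above a per-block target and decays back toward a 1-wei floor when demand drops. A minimal Python sketch, with constants taken from the EIP (the helper layout is this sketch's own):

```python
# Constants from EIP-4844 (mainnet parameters at launch).
MIN_BASE_FEE_PER_BLOB_GAS = 1          # wei: the price floor
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477
GAS_PER_BLOB = 131072                  # 2**17 bytes of blob gas per blob
TARGET_BLOB_GAS_PER_BLOCK = 3 * GAS_PER_BLOB

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e**(numerator / denominator),
    as defined in EIP-4844 (Taylor-series accumulation)."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = numerator_accum * numerator // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    """Price per unit of blob gas, exponential in accumulated excess usage."""
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS,
                            excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

def next_excess_blob_gas(parent_excess: int, parent_blob_gas_used: int) -> int:
    """Excess grows when a block uses more than the target, shrinks otherwise."""
    return max(parent_excess + parent_blob_gas_used - TARGET_BLOB_GAS_PER_BLOCK, 0)
```

At zero excess the fee sits at the 1-wei floor, which is why sustained low demand could make blob data extremely cheap, while a rush of users pushes the price up exponentially, matching Daniel's caution that the eventual cost is an open empirical question.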
I believe that the Blob may further evolve with EIPs to make it cheaper, but currently, there is no initial data. In fact, Ethereum is unlikely to declare that this is the best design, so we will wait and see until it goes live at the end of this year or early next year, and everyone uses it. It may take a considerable amount of time, like a year and a half, to know how cheap this really is. Overall, this is a huge benefit; it's a feature that we L2s have long awaited. Without it (Proto-danksharding), we at Taiko wouldn't want to launch because it's too expensive.
Opportunities for Taiko to Overtake
0xRJ.eth: This explanation also partly clarifies why Taiko is expected to launch in the first half of next year. After all, the market, especially this year, has been saying that the L2 track is about to explode, and most L2 projects are rushing to launch their mainnets in Q3 and Q4 this year.
Daniel: One advantage of launching after EIP-4844 goes live is that we can support it from day one. Once you've launched a mainnet, it takes much longer to make changes while the vehicle is still moving. If we prepare well and can test everything on the testnet before going live, Taiko might become the first project to implement EIP-4844 on mainnet. We aim to be that project.
However, timing isn't the most important factor. Providing developers with a very good overall programming experience is more important. There's no need to rush to be first or fastest.
0xRJ.eth: It mainly depends on whether the "first mover" advantage is sustainable. If there are larger trends that could potentially reshape the entire track, then I think this presents a very good opportunity for projects that haven't launched their mainnets yet to overtake.
Daniel: Exactly. The greater the changes, the more opportunities there will be. If Ethereum itself becomes fully mature, it actually reduces opportunities for new projects. Changes in Ethereum create more opportunities for new projects to seize.
Best Decision and Biggest Regret
0xRJ.eth: Looking back on your experiences, judgments, choices, and thoughts over the years, what do you think is the best decision you've made? And what is your biggest regret?
Daniel: I consider myself a long-term believer in blockchain; I don't follow the crowd to do many short-term things, so I tend to think more and try to avoid the influence of noise, which is very important. Everyone's energy is limited, and in the world of blockchain, there are all sorts of stories and interesting things. Pursuing short-term profit maximization or engaging in interesting operational projects can lead to a completely different type of person in the blockchain space.
I am still driven by technology and want to accomplish something, so being able to focus and stay away from temptations is very important. For example, I basically don't play DeFi and never use leverage. If I get caught up in that, it would be very difficult to extricate myself and maintain a long-term commitment. So I think that's one of the things I've done right.
As for the things I haven't done well, I may have taken some detours. I don't think there are many obvious mistakes, and I consider myself fortunate in making some choices. For example, I have a positive outlook on Ethereum, and many people doubted my persistence a few years ago, saying Ethereum was slow and that another chain was better. I would work hard to persuade them why Ethereum is good and why I am more optimistic about it, so luck also plays a role to some extent.
0xRJ.eth: If you had the chance to go back to that time, would you make some adjustments or different choices?
Daniel: I haven't always been a blockchain believer without pause; I took a break for a year around 2016-2017 when the market was very poor, and I returned to Web2. So to say that someone is a blockchain believer who has never doubted is not entirely true because people have to yield to reality. When you have pressures in real life and haven't found opportunities in the blockchain space, you will still return to a more practical life to make a living. So I am also a relatively realistic person. Of course, if I had persisted at that time, it might have led to a different outcome, but in life, you can't always compare with the optimal path because there are no choices; you naturally arrive at this point and just strive to do well at every step without regrets.
0xRJ.eth: I actually don't think that (the period of leaving Web3) is a "suboptimal" choice. I think switching between both sides for various practical reasons can actually make you clearer about what you want and help you think better.
Daniel: Yes, from this perspective, I also tell my team members that they have practical needs. I can't sell them a perfect ideal and tell them to be idealists. We are always trying to meet reality while striving to do things that we believe will contribute to society or the industry in the future. Even if everyone only does a little, I think that's already good enough. If we completely sell ideals, it wouldn't be much different from being a fraud.
Moreover, many times, these ideals cannot be sold; they can only be indirectly influenced and subtly affect others.
I particularly hope that our team members will have more persistence in their ideals and be able to do things like we are now fully open-source, never having any proprietary code. This is our team's positioning, and I feel quite proud that everyone is willing to contribute in this way rather than applying for patents or seeking copyright protection. This is something I think Taiko is doing quite well.
0xRJ.eth: Thank you very much for your time today and for these valuable insights. I look forward to discussing more detailed content related to the track next time.
Daniel: You're welcome! Thank you, RJ!