a16z: Traps, Solutions, and Future Perspectives in Token Design
Source link: Token Design: Mental Models, Capabilities, and Emerging Design Spaces with Eddy Lazzarin
Video Author: Eddy Lazzarin, a16z Crypto
Compilation: Qianwen, ChainCatcher
Eddy Lazzarin is the head of engineering at a16z crypto. This video covers a variety of topics: he discusses common pitfalls that many encounter when considering token design, as well as potential solutions. He believes that token design is a truly early-stage field, and that it should really be called protocol design, because tokens are not the end goal; what you are really aiming for is the protocol.
Tokens are a very interesting, useful, and powerful new type of tool that changes the way protocols are designed and the outcomes that can be achieved, but tokens should not be the core object of design. Current protocol design resembles "alchemy" more than a science, as designers' understanding is far from comprehensive or scientific, and most projects still require a lot of experimentation.
The content is divided into three parts: first, common mental models in token design; second, token classification, discussing more specifically what tokens are and how we think about developing and enhancing their capabilities; and finally, the technology tree theory, which explores how to leverage technology to make our designs more likely to succeed.
1. Mental Models
First, tokens serve the protocol. They are merely tools, just a part of the design process, and they should not be the goal. If you want to create something decentralized, then tokens may be part of that, as they can effectively allow people to have ownership of the protocol and help maintain alignment among participants.
Three Stages of Design
From my interactions with portfolio companies, I have summarized three stages of successful design processes.
First Stage: Define Goals. A goal is a concise description of a successful protocol outcome. It should be clear and unambiguous whether a given design has achieved the goal, so that we can draw a sharp line between success and failure. If we are unclear about what our goals are, we need to start over and forget about tokens for a moment. Ideally, the goal should be measurable, even if we are not yet sure how to measure success.
Second Stage: Introduce Constraints. Generally, there are two types of constraints: endogenous and exogenous. Endogenous constraints are those we choose in order to simplify the design process; they involve trade-offs, or are trade-offs in themselves. For example, we might choose to limit certain interesting features we like. I once watched a talk by the Subset Games team, who designed several very cool games such as "Into the Breach" and "FTL: Faster Than Light", about designing under constraints; I recommend it to anyone designing protocols. When they choose constraints, they consider only what makes the game interesting. Endogenous constraints can come from many places, but they are usually chosen by the designers themselves. Exogenous constraints are imposed by nature, technological conditions, regulations, and various other factors. I will elaborate on this later.
Third Stage: Design Mechanisms. Once we have constraints and goals, we can think clearly about the mechanisms that can meet those goals. Now, whenever we consider a mechanism, we should be clear about whether it violates these constraints and whether it brings us closer to the goal. A protocol will consist of a series of mechanisms, all pushing towards a specific goal based on some constraints.
Take MakerDAO as an example. Their goal is to develop a stable Ethereum-native asset, which of course has multiple interpretations of what "stable" and "native" mean. Their constraints include being pegged to the dollar and being fully backed by native on-chain assets, among others.
Common Pitfalls
(1) Overemphasizing Tokens. I have touched on this a bit, but if you are always thinking about rewards or token distribution instead of how to maintain alignment among participants in your system, you may not be considering the protocol; you are thinking about tokens. Tokens are not the protocol, and they should not be your goal. They should just be a tool.
How to Avoid This Trap? Ask yourself: How does this system operate without tokens? If the system completely fails without tokens, then you may be overemphasizing the role of tokens. If several key parts of the system fail, then the situation is somewhat better; your tokens are indeed important and necessary for overall balance, but the system remains coherent and complete without them. Therefore, you should return to thinking about the goals of the system.
(2) Unbounded Design Space. In design, you have too many ideas and possibilities, and you may not even know where to start because there are so many things to do. This often arises from unclear goals, so you need to refine your goals. It may also be due to a lack of understanding of the constraints imposed by the outside world, or you have not yet accepted these constraints.
If you bring these constraints in, the design space shrinks and becomes clearer. Two questions are very helpful for constraining it. First: what is the powerful concept you want to build? It could be a deep idea, an advantage, or a shift in trends. Ask yourself what this powerful concept is and how you can maximize it, focusing on it rather than considering the entire system first. Second: what is the biggest weakness of this design, the thing that keeps you up at night? It could be a part you suspect may not work, a critical vulnerability, or a concern; asking what constraints you can accept to mitigate it greatly narrows the design space.
(3) Always Relying on the Community. When faced with challenges in designing certain parts of the system, pushing everything onto the community to solve or expecting unseen forces to fill the gaps is always risky. While permissionless systems are popular and have led to many amazing innovations, you cannot predict community actions, nor should you expect them to solve the most obvious problems in your system.
There are several key questions you should ask yourself: What are our true expectations of the community, and what are we giving them? It’s not about whether we are giving them enough tokens; rather, it’s about what powers we are giving them. What capabilities do they have? What ownership do they possess? Have they been granted enough power to balance this responsibility?
If you truly expect them to fix something, or if you expect other ambitious individuals to add interesting features or fix components of the system, then you should first ask yourself: Are you going to build here? If you are not, because it lacks enough upside, enough power, or enough flexibility, then you shouldn’t expect others to do so either.
2. Token Classification
This is not a complete list; I have been discussing this with team members, and I believe we will revise it soon. But this is just to enumerate all the capabilities we have seen tokens exhibit so far.
Tokens are a tool within protocols, and more abstractly, they are a data structure. So how do we see this data structure being used across different protocols? Their capabilities can be broadly classified into five categories: payment, voting, staking, metadata, and claims (ownership). I believe each category will subdivide further over time, but this grouping feels relatively intuitive to me.
Payment
The payment function can be divided into three categories. First, as an internal currency for the community or project. We have not seen too many cases like this, but there are some examples. For instance, SourceCred is an interesting example, and FWB may be heading in this direction. It differs from traditional payment methods like dollar payments because it exists within a specific community that has control over the currency. They can use monetary policy and other means for this internal currency, such as ensuring that this currency is stable and pegged to the value of certain specific assets, perhaps minting or burning it based on specific, community-wide goals.
Second, the most common and easiest-to-understand use of payment tokens is paying for network resources, with Ethereum and Bitcoin falling into this category: you pay for computation, storage, or other resources within the network. We have EIP-1559, staking, liquidity mechanisms, and so on to determine how tokens are used to price the different resources within the system, especially computational resources.
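To make the "tokens price network resources" idea concrete, here is a simplified sketch of the EIP-1559-style base-fee adjustment, where the protocol algorithmically reprices block space each block. The constant matches the EIP, but the function name and figures are illustrative, not a client implementation:

```python
# Simplified EIP-1559-style base-fee update: the fee for block space moves
# toward equilibrium, rising when blocks are over the gas target and
# falling when under, by at most 1/8 (12.5%) per block.

BASE_FEE_MAX_CHANGE_DENOMINATOR = 8  # per EIP-1559

def next_base_fee(base_fee: int, gas_used: int, gas_target: int) -> int:
    """Return the next block's base fee given this block's gas usage."""
    delta = gas_used - gas_target
    change = base_fee * delta // (gas_target * BASE_FEE_MAX_CHANGE_DENOMINATOR)
    return max(base_fee + change, 0)

# A completely full block (2x target) raises the fee by 12.5%;
# an empty block lowers it by 12.5%; an on-target block leaves it unchanged.
fee = 100_000_000_000  # 100 gwei, illustrative
full = next_base_fee(fee, gas_used=30_000_000, gas_target=15_000_000)
empty = next_base_fee(fee, gas_used=0, gas_target=15_000_000)
```

The design choice worth noticing is that the resource price is set by a mechanism, not a vote: the token's payment role is wired directly into how the protocol meters demand.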
The third type of payment token works like an in-game currency. Games or protocols may need their resources to be stably priced: if people are using the system and depend on those resources, the token that buys them needs a relatively stable price. Where that stability comes from matters less, because you use the token simply to access specific parts of the application.
So where do stablecoins fit in? Of course, a stablecoin can serve as a payment method in any of the three ways mentioned above. But what makes a stablecoin a stablecoin is the mechanism behind it that stabilizes it, so stablecoins generally belong to the ownership category.
Ownership
There are generally two types of ownership claims: on-chain (deposits) and off-chain. Deposit tokens represent a claim on other tokens; an example is Uniswap LP tokens, which are ERC20 tokens in V2 and NFTs in V3. The stablecoin Dai, from the Maker protocol, is also an on-chain deposit, because vault holders use it to claim their underlying collateral. So deposit tokens are those that can be used to claim other tokens in an on-chain environment.
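The deposit-token idea can be sketched with Uniswap-V2-style pool-share math: LP tokens are a pro-rata claim on pooled reserves, redeemed by burning. The class below is a toy model; real pools also handle fees, minimum liquidity, and integer rounding:

```python
# Toy sketch of a deposit token: LP tokens claim a proportional share of
# two pooled reserves, in the style of Uniswap V2.

class Pool:
    def __init__(self, reserve_a: float, reserve_b: float):
        self.reserve_a = reserve_a
        self.reserve_b = reserve_b
        # Initial LP supply is the geometric mean of the deposits (as in V2).
        self.total_lp = (reserve_a * reserve_b) ** 0.5

    def burn(self, lp_amount: float) -> tuple[float, float]:
        """Redeem LP tokens for a proportional share of both reserves."""
        share = lp_amount / self.total_lp
        out_a = self.reserve_a * share
        out_b = self.reserve_b * share
        self.reserve_a -= out_a
        self.reserve_b -= out_b
        self.total_lp -= lp_amount
        return out_a, out_b

pool = Pool(100.0, 400.0)   # total_lp = 200
print(pool.burn(50.0))      # burning 25% of supply returns (25.0, 100.0)
```

The point is that the LP token's value is purely a claim: it is defined by the redemption rule, not by anything intrinsic to the token itself.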
The second type of token represents ownership of off-chain assets, such as tokenized real-world assets or real estate. We have not seen many such examples. A more modern example is what is now called a redeemable: a token that can be exchanged for a physical item, for instance an NFT redeemable for the artwork it represents. There are even more interesting arrangements: a physical item can control an NFT, using an embedded chip or similar digital mechanism to govern the ownership of the corresponding NFT.
Voting
Voting can be used to fund projects, allocate resources, serve as group payments or transfers, and can also facilitate software upgrades. It can also be a measure of social consensus, such as choosing a leader to decide the future plans of a project.
Staking
Tokens can be designed to earn rewards through smart contracts. There is no legal agreement here; rather, the mechanism is built so that the token benefits from some on-chain activity. Maker is an example: if the protocol operates well and its many token holders do their jobs so that the system runs smoothly, holders benefit from rewards. This is how smart contracts and protocol design can reward good community management.
You can also make a token the subject of a legal agreement that entitles you to rewards. You can create a token that represents a portion of equity or shares in a company, subject of course to various legal requirements and restrictions. There was a time when some theorized that security tokens could be created this way, although we have not yet seen many substantial cases.
Tokens are also used to underwrite risks in exchange for rewards. Maker uses this principle; if there are losses in the Maker protocol, more Maker tokens will be generated, which dilutes the value held by Maker holders. By holding Maker tokens, holders bear some risk, which is also part of what drives Maker holders to promote community building. If they want to see their investments appreciate, they need to support the development of this system.
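The dilution mechanism described above can be sketched numerically. This is loosely modeled on Maker's approach of minting and selling new governance tokens to cover a shortfall; the function name and figures are illustrative, not Maker's actual debt-auction implementation:

```python
# Sketch of risk underwriting by dilution: when the protocol takes a loss,
# new governance tokens are minted and sold to cover it, shrinking every
# existing holder's share of the supply.

def dilute(total_supply: float, holder_balance: float,
           loss: float, token_price: float) -> tuple[float, float]:
    """Mint enough tokens at token_price to cover the loss.

    Returns (new_total_supply, holder_share_after)."""
    minted = loss / token_price
    new_supply = total_supply + minted
    return new_supply, holder_balance / new_supply

# A 500,000-unit loss at a token price of 1.0 grows supply by 50%,
# cutting a holder's share from 1.0% to about 0.67%.
supply, share = dilute(total_supply=1_000_000.0, holder_balance=10_000.0,
                       loss=500_000.0, token_price=1.0)
```

This makes the incentive explicit: holders bear losses through dilution, so they are motivated to govern the system so that losses do not occur.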
Metadata
First, tokens represent membership, determining whether you can access a specific space, whether you are in a particular community, or whether you belong to certain groups. Protocols or tools written by third parties can leverage this membership attribute in any way, which is permissionless. For example, some NFT communities may decide that only token holders can join, providing specific functionalities for those who hold this token. Membership is an interesting type of metadata provided by tokens.
Tokens also represent reputation. Some people debate whether reputation should be transferable; I personally think it may not be. But it can be fungible in some cases and non-fungible in others. If it records your achievements, it may be non-fungible; if it is something like a credit score or another continuous scoring system, it may be fungible. Either way, it is a form of metadata.
Tokens also represent identity or reference. ENS is an example in this regard; ENS names can point to addresses and can be updated, which is different from the DNS system.
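An updatable name registry of this kind can be sketched as a toy class; the method names, names, and addresses below are made up for illustration and are not the ENS contract interface:

```python
# Toy ENS-like registry: names resolve to addresses, and only the name's
# owner can repoint them later.

class NameRegistry:
    def __init__(self):
        self._owners: dict[str, str] = {}   # name -> owner address
        self._records: dict[str, str] = {}  # name -> target address

    def register(self, name: str, owner: str, target: str) -> None:
        if name in self._owners:
            raise ValueError("name already registered")
        self._owners[name] = owner
        self._records[name] = target

    def update(self, name: str, caller: str, new_target: str) -> None:
        """Repoint a name; permitted only for its owner."""
        if self._owners.get(name) != caller:
            raise PermissionError("only the owner may update")
        self._records[name] = new_target

    def resolve(self, name: str) -> str:
        return self._records[name]

registry = NameRegistry()
registry.register("alice.eth", owner="0xA", target="0x111")
registry.update("alice.eth", caller="0xA", new_target="0x222")
print(registry.resolve("alice.eth"))  # the updated target, "0x222"
```

The key property is the separation between the stable identifier (the name, which the token represents) and the mutable reference it points to.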
Off-chain data can also be a form of metadata. Examples include off-chain KYC attestations and verifiable credentials. Another good example is diplomas or academic qualifications: someone issues you a certificate that is publicly visible, traceable, and authentic.

We have not seen many cases of tokens representing permissions and capabilities on-chain, for instance an entity explicitly granting you the permission to call a function, change a piece of code, or transfer something on-chain.

You can even use tokens as interfaces. We have seen examples where not only SVG data is placed in the token URI, but an entire HTML page, and even a bit of JavaScript. You can place an interface in an NFT, control that interface, and embed it into objects that people own and transfer.
An interesting example is BEEP3R: you first mint an NFT, and then by holding it you can broadcast text messages to other BEEP3R holders. The message is displayed on the BEEP3R's image, a little pager. When you have a BEEP3R, you can also send messages directly to other BEEP3R holders, much like using XMTP on its own.
So what are this token's functions? It is a membership token: holding it lets you receive messages, and any wallet interface that correctly renders animated media from the token metadata and supports the standard can display the messages you receive. It is also an identity token: messages are addressed to your BEEP3R token ID, and sending and receiving happen only within that collection. And it acts as an interface, letting you view the messages associated with that NFT.
3. Technology Tree Theory
We can see that some areas have already developed significantly, such as tokens as payment methods and network resources, while other areas have yet to mature, such as interfaces and metadata. So why is this the case? I do not have a complete answer, but I think it may relate to the technology tree, which is far from complete.
My question is: why do some products appear at certain times, and why do some take longer to emerge than others? Take lending protocols as an example; it is hard to imagine them functioning without stablecoins. When you take on debt from a lending protocol, you want it denominated in a stable asset whose price you can predict, so we needed stablecoins before we could truly have lending protocols.
Similarly, we also need AMMs for lending protocols because if you want to use a lending protocol for leverage, especially in early simple lending protocols, you need to be able to borrow assets, such as stablecoins. If you want to quickly exchange that stablecoin for that asset and want more risk exposure, then you need an AMM. It wasn't until we had properly functioning AMMs and stablecoins that the development of lending protocols occurred.
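The leverage loop above depends on being able to swap instantly, which a constant-product AMM provides. Here is a minimal sketch of such a swap (the x·y = k rule popularized by Uniswap); fees are omitted and the pool sizes are illustrative:

```python
# Minimal constant-product AMM swap: output is whatever amount keeps
# reserve_in * reserve_out constant after the input is added.

def swap_out(reserve_in: float, reserve_out: float, amount_in: float) -> float:
    """Return the output amount for a fee-less x*y=k swap."""
    k = reserve_in * reserve_out
    new_reserve_in = reserve_in + amount_in
    return reserve_out - k / new_reserve_in

# Pool of 1,000,000 stablecoins vs 500 ETH (spot price ~2,000).
# Swapping 10,000 borrowed stablecoins yields slightly under 5 ETH
# because the trade itself moves the price (slippage).
eth_out = swap_out(1_000_000.0, 500.0, 10_000.0)
```

This is why the AMM is a prerequisite: a borrower can close the loop (borrow stablecoin, swap to the risk asset, re-collateralize) only if such an always-available venue exists.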
But how do we get properly functioning AMMs and stablecoins? It is difficult without an interoperable token standard, because stablecoins, AMMs, and all the systems around them need to know how other projects interface with them. And to have ERC20 tokens, you need fully programmable smart contracts. Strictly speaking you may not need full programmability, but that is how the standard first appeared, since Ethereum launched without ERC20; full programmability leaves enough open design space, though that point can certainly be debated. In summary, I believe there is a technology tree in which certain technologies are prerequisites for others.
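The value of a shared standard is that downstream protocols only need a small, common interface rather than knowledge of each token's internals. A hypothetical ERC20-flavored sketch of that minimal surface, written in Python rather than Solidity for illustration:

```python
# Why a token standard shrinks the design space: an AMM or stablecoin only
# needs the minimal interface below, not each token's implementation details.

from typing import Protocol

class Token(Protocol):
    def balance_of(self, account: str) -> int: ...
    def transfer(self, sender: str, recipient: str, amount: int) -> bool: ...

class SimpleToken:
    """One possible implementation satisfying the shared interface."""

    def __init__(self, balances: dict[str, int]):
        self.balances = balances

    def balance_of(self, account: str) -> int:
        return self.balances.get(account, 0)

    def transfer(self, sender: str, recipient: str, amount: int) -> bool:
        if self.balances.get(sender, 0) < amount:
            return False  # insufficient balance; no state change
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
        return True
```

Any protocol written against `Token` works with every conforming implementation, which is exactly the constraint-relieving role the talk attributes to ERC20.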
Here are two questions: What are the key technologies that unlock future applications and protocols? What technologies do we need to develop useful reputation systems or decentralized and trustless interfaces? And the second question is somewhat like the first, but in reverse: What applications and protocols will be unlocked by upcoming technologies?
For example, account abstraction, EIP-4844, Verkle trees, zero-knowledge machine learning, and so on. These questions are interesting because if we can foresee the arrival of specific technologies, and those technologies can relieve or introduce design constraints, how will that change our designs? And if a specific technology can relieve a constraint, should we invest energy in developing it?
If we view things as a technology tree, it may help us reason about what is coming next or what you need to achieve your desired set of constraints. Therefore, linking this back to my initial point about constraints, I believe new technologies alleviate the constraints we previously faced. For instance, without the ERC20 standard, the constraints on any AMM or stablecoin design would be that it either needs to introduce a standard or be able to cope with various different designs.
Imagine designing a universal AMM without using a specific token standard; this would be extremely, extremely difficult. I believe this would be an almost insurmountable constraint, but having interoperability standards means we can directly support ERC20 tokens, which constrains the design space and makes it feasible.
If we can anticipate what technologies will emerge in the future, what impact will that have on the constraints of our protocol design? If we have specific goals or specific constraints, what technologies do we need? Technologies that can alleviate these constraints and make these goals possible again through new mechanisms.