LazAI Research: How AI Economy Surpasses the DeFi TVL Myth

2025-05-13 20:52:00

Introduction

Decentralized Finance (DeFi) ignited exponential growth with a handful of simple yet powerful economic primitives, transforming blockchain networks into a global, permissionless market and fundamentally disrupting traditional finance. During DeFi's rise, a few key metrics became the universal language of value: Total Value Locked (TVL), Annual Percentage Yield (APY/APR), and liquidity. These concise metrics inspired participation and trust. For instance, in 2020, DeFi's TVL (the dollar value of assets locked in protocols) grew 14-fold, then quadrupled again in 2021, peaking above $112 billion. High yields (some platforms claimed APYs of up to 3,000% during the liquidity-mining frenzy) attracted capital, while deep liquidity pools meant lower slippage and more efficient markets. In short, TVL tells us how much capital is involved, APR tells us how much can be earned, and liquidity indicates how easily assets can be traded. Despite their flaws, these metrics built a financial ecosystem worth billions from scratch. By converting user participation into direct financial opportunity, DeFi created a self-reinforcing adoption flywheel that drove rapid proliferation and mass participation.

Today, AI stands at a similar crossroads. Unlike DeFi, however, the current AI narrative is dominated by large general-purpose models trained on massive internet datasets. These models often struggle to deliver in niche areas, specialized tasks, or personalized use cases. Their one-size-fits-all approach is powerful yet fragile: general, but misaligned. This paradigm needs to shift. The next era of AI should be defined not by model scale or generality but by a bottom-up approach built on smaller, highly specialized models. Such customized AI requires a new kind of data: high-quality, human-aligned, and domain-specific. Acquiring such data is not as simple as web scraping; it requires active, conscious contributions from individuals, domain experts, and communities.

To propel this new era of specialized, human-aligned AI, we need to build an incentive flywheel similar to what DeFi designed for finance. This means introducing new AI-native primitives to measure data quality, model performance, agent reliability, and alignment incentives—these metrics should directly reflect the true value of data as an asset (rather than merely as input).

This article will explore these new primitives that can form the pillars of an AI-native economy. We will elaborate on how, if the correct economic infrastructure is established (i.e., generating high-quality data, reasonably incentivizing its creation and use, and being individual-centered), AI can thrive. We will also analyze platforms like LazAI and how they are pioneering the construction of these AI-native frameworks, leading to a new paradigm of pricing and rewarding data, thereby fueling the next leap in AI innovation.

The Incentive Flywheel of DeFi: TVL, Yields, and Liquidity—A Quick Review

The rise of DeFi was not accidental; its design made participation both profitable and transparent. Key metrics such as Total Value Locked (TVL), Annual Percentage Yield (APY/APR), and Liquidity are not just numbers but primitives that align user behavior with network growth. Together, these metrics create a virtuous cycle that attracts users and capital, thereby driving further innovation.

  • Total Value Locked (TVL): TVL measures the total capital deposited in DeFi protocols (such as lending pools and liquidity pools) and has become synonymous with the "market cap" of DeFi projects. Rapid growth in TVL is seen as a sign of user trust and protocol health. For example, during the DeFi boom from 2020 to 2021, TVL surged from under $10 billion to over $100 billion, and by 2023, it surpassed $150 billion, demonstrating the scale of value participants are willing to lock into decentralized applications. High TVL creates a gravitational effect: more capital means higher liquidity and stability, attracting more users seeking opportunities. Although critics point out that blindly chasing TVL may lead protocols to offer unsustainable incentives (essentially "buying" TVL), masking inefficiencies, without TVL, the early DeFi narrative would lack a concrete way to track adoption.
  • Annual Percentage Yield (APY/APR): Yield promises transform participation into tangible opportunities. DeFi protocols began offering astonishing APRs to liquidity or capital providers. For instance, Compound launched the COMP token in mid-2020, pioneering the liquidity mining model—rewarding liquidity providers with governance tokens. This innovation sparked a frenzy of activity. Using the platform became not just a service but an investment. High APYs attracted yield seekers, further driving up TVL. This reward mechanism incentivized early adopters with substantial returns, propelling network growth.
  • Liquidity: In finance, liquidity refers to the ability to transfer assets without causing significant price fluctuations—this is the cornerstone of a healthy market. In DeFi, liquidity is often initiated through liquidity mining programs (where users earn tokens for providing liquidity). The deep liquidity of decentralized exchanges and lending pools means users can trade or borrow with low friction, enhancing user experience. High liquidity leads to higher trading volumes and utility, attracting more liquidity—a classic positive feedback loop. It also supports composability: developers can build new products (derivatives, aggregators, etc.) on top of liquid markets, driving innovation. Thus, liquidity becomes the lifeblood of the network, fueling the emergence of adoption and new services.

These primitives together form a powerful incentive flywheel. Participants who create value by locking assets or providing liquidity are immediately rewarded (through high yields and token incentives), encouraging more participation. This transforms individual participation into widespread opportunities—users earn profits and governance influence—and these opportunities, in turn, generate network effects, attracting thousands of users. The results are remarkable: by 2024, the number of DeFi users exceeded 10 million, with its value growing nearly 30 times in just a few years. Clearly, large-scale incentive alignment—turning users into stakeholders—is key to DeFi's exponential rise.

What the Current AI Economy Lacks

If DeFi demonstrated how bottom-up participation and incentive alignment can ignite a financial revolution, the current AI economy still lacks foundational primitives to support a similar transformation. Today's AI is dominated by large general models trained on massive scraped datasets. These foundational models are impressive in scale but are designed to solve all problems, often failing to serve anyone particularly well. Their "one-size-fits-all" architecture struggles to adapt to niche areas, cultural differences, or individual preferences, leading to fragile outputs, blind spots, and an increasing disconnect from real-world needs.

The next generation of AI will be defined not just by scale but by contextual understanding—the ability of models to understand and serve specific domains, specialized communities, and diverse human perspectives. This contextual intelligence, however, requires a different input: high-quality, human-aligned data. And that is precisely what is missing. There is no widely recognized mechanism to measure, identify, value, or prioritize such data, nor an open process through which individuals, communities, or domain experts can contribute their perspectives and improve the intelligent systems that increasingly affect their lives. As a result, value remains concentrated in the hands of a few infrastructure providers, while most participants remain cut off from the upside of the AI economy. Only by designing new primitives that discover, validate, and reward high-value contributions (data, feedback, alignment signals) can we unlock the participatory growth loop on which DeFi's prosperity was built.

In short, we must also ask:

How should we measure the value created? How can we build a self-reinforcing adoption flywheel to drive bottom-up, individual-centered data participation?

To unlock a DeFi-like "AI-native economy," we need to define new primitives that transform participation into opportunities for AI, catalyzing network effects that have yet to be seen in this field.

AI Native Tech Stack: New Primitives for a New Economy

We are no longer just transferring tokens between wallets; we are inputting data into models, transforming model outputs into decisions, and deploying AI agents into action. This requires new metrics and primitives to quantify intelligence and alignment, just as DeFi metrics quantify capital. For example, LazAI is building the next generation of blockchain networks that address the AI data alignment problem by introducing new asset standards for AI data, model behavior, and agent interactions.

The following outlines several key primitives that define the on-chain AI economy's value:

  • Verifiable Data (the new "Liquidity"): Data is to AI what liquidity is to DeFi—the lifeblood of the system. In AI (especially large models), having the right data is crucial. But raw data can be low-quality or misleading; what is needed is high-quality data that is verifiable on-chain. A possible primitive here is "Proof of Data (PoD)/Proof of Data Value (PoDV)", which would measure the value of data contributions not only by quantity but by quality and impact on AI performance. It can be seen as the counterpart of liquidity mining: contributors who provide useful data (or labels/feedback) are rewarded according to the value their data brings. Early designs of such systems have already emerged. For example, one blockchain project's Proof of Data (PoD) consensus treats data as the primary resource for validation (analogous to energy in proof of work or capital in proof of stake), rewarding nodes based on the quantity, quality, and relevance of the data they contribute.

When generalized to the AI economy, we might see "Total Locked Data Value (TDVL)" as a metric: an aggregate measure of all valuable data in the network, weighted by verifiability and usefulness. Verified data pools could even be traded like liquidity pools—for example, a verified medical imaging pool for on-chain diagnostic AI could have quantifiable value and utility. Data provenance (understanding the source and modification history of data) will be a key part of this metric, ensuring that the data input into AI models is trustworthy and traceable. Essentially, if liquidity is about available capital, verifiable data is about available knowledge. Metrics like Proof of Data Value (PoDV) can capture the amount of useful knowledge locked in the network, while on-chain data anchoring achieved through LazAI's Data Anchoring Token (DAT) makes data liquidity a measurable and incentivized economic layer.
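As a rough illustration of how a PoDV-style score and a TDVL aggregate could be computed, here is a minimal Python sketch. The function names, the weights, and the log-scaled quantity term are illustrative assumptions for this article, not LazAI's (or any project's) actual formulas:

```python
import math
from dataclasses import dataclass

@dataclass
class DataContribution:
    size_mb: float       # raw quantity of contributed data
    quality: float       # 0..1, e.g. from validator audits
    relevance: float     # 0..1, fit to the target domain
    model_uplift: float  # 0..1, measured improvement the data produced

def podv_score(c: DataContribution,
               w_quality: float = 0.4,
               w_relevance: float = 0.3,
               w_uplift: float = 0.3) -> float:
    """Weight a contribution by quality, relevance, and measured model
    uplift, then scale by log-quantity so volume alone cannot dominate."""
    value_density = (w_quality * c.quality
                     + w_relevance * c.relevance
                     + w_uplift * c.model_uplift)
    return value_density * math.log1p(c.size_mb)

def total_data_value_locked(contributions) -> float:
    """TDVL: aggregate PoDV score over all verified contributions."""
    return sum(podv_score(c) for c in contributions)
```

The logarithmic quantity term encodes the "quality over bulk" intent: doubling the data volume adds less than double the score, while the density term rewards verified usefulness.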

  • Model Performance (a new asset class): In the AI economy, trained models (or AI services) themselves become assets—potentially a new asset class alongside tokens and NFTs. Well-trained AI models have value due to the intelligence encapsulated in their weights. But how do we represent and measure this value on-chain? We may need on-chain performance benchmarks or model certifications. For instance, accuracy on standard datasets or win rates in competitive tasks could serve as performance scores recorded on-chain. This could be seen as an on-chain "credit rating" or KPI for AI models. Such scores could be adjusted as models are fine-tuned or data is updated. Projects like Oraichain have explored combining AI model APIs with reliability scores (validating whether AI outputs meet expectations through test cases) on-chain. In AI-native DeFi ("AiFi"), we could envision staking based on model performance—for example, if developers believe their model performs excellently, they could stake tokens; if independent on-chain audits confirm their performance, they are rewarded (if the model performs poorly, they lose their stake). This would incentivize developers to report honestly and continuously improve their models. Another idea is tokenized model NFTs with performance metadata—the "floor price" of a model NFT could reflect its utility. Such practices are already emerging: some AI marketplaces allow the buying and selling of model access tokens, and protocols like LayerAI (formerly CryptoGPT) explicitly view data and AI models as emerging asset classes in the global AI economy. In short, while DeFi asks, "How much capital is locked?", AI-DeFi will ask "How much intelligence is locked?"—not just in terms of computational power (though equally important), but also in terms of the effectiveness and value of models running in the network. New metrics might include "Model Quality Proof" or a temporal index of on-chain AI performance improvements.
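The performance-staking idea above can be made concrete with a hypothetical settlement rule. Everything here—the function name, the tolerance, the reward rate, and the proportional slashing formula—is an assumption invented for illustration, not a real protocol's logic:

```python
def settle_performance_stake(stake: float,
                             claimed_score: float,
                             audited_score: float,
                             tolerance: float = 0.02,
                             reward_rate: float = 0.10) -> float:
    """Return the amount paid back to the model developer.

    If an independent on-chain audit confirms the claimed score (within
    tolerance), the stake is returned plus a reward; otherwise the stake
    is slashed in proportion to the overstatement."""
    if audited_score + tolerance >= claimed_score:
        return stake * (1.0 + reward_rate)
    overstatement = min(1.0, (claimed_score - audited_score) / claimed_score)
    return stake * (1.0 - overstatement)
```

A developer who stakes 100 tokens on a claimed 0.9 accuracy keeps 110 if the audit confirms it, but loses half the stake if the audited score turns out to be 0.45—making honest reporting the profitable strategy.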
  • Agent Behavior and Utility (on-chain AI agents): One of the most exciting and challenging new elements in AI-native blockchains is the autonomous AI agents running on-chain. They could be trading bots, data curators, customer service AIs, or complex DAO governors—essentially software entities capable of perceiving, deciding, and acting on behalf of users or even autonomously on the network. The DeFi world has only basic "bots"; in the AI blockchain world, agents could become first-class economic entities. This creates a demand for metrics around agent behavior, trustworthiness, and utility. We might see mechanisms like "Agent Utility Scores" or reputation systems. Imagine each AI agent (potentially represented as NFTs or semi-fungible tokens (SFTs)) accumulating reputation based on their actions (completing tasks, collaborating, etc.). Such ratings would be similar to credit scores or user ratings but tailored for AI. Other contracts could decide whether to trust or use agent services based on this. In LazAI's proposed iDAO (individual-centered DAO) concept, each agent or user entity has its own on-chain domain and AI assets. We can envision these iDAOs or agents establishing measurable records.

Existing platforms have begun to tokenize AI agents and provide on-chain metrics: for instance, Rivalz's "Rome protocol" creates NFT-based AI agents (rAgents), with their latest reputation metrics recorded on-chain. Users can stake or lend these agents, with rewards depending on the agents' performance and impact within the collective AI "swarm." This is essentially DeFi for AI agents, showcasing the importance of agent utility metrics. In the future, we might discuss "active AI agents" as we do "active addresses," or talk about "agent economic impact" as we do trading volume.

  • Attention Trajectories may become another primitive—recording what agents focus on during decision-making (which data, signals). This could make black-box agents more transparent and auditable, attributing their successes or failures to specific inputs. In summary, agent behavior metrics will ensure accountability and alignment: if autonomous agents are to manage large sums of money or critical tasks, their reliability must be quantified. High agent utility scores could become a prerequisite for on-chain AI agents managing substantial funds (similar to how high credit scores are thresholds for large loans in traditional finance).
  • Usage Incentives and AI Alignment Metrics: Finally, the AI economy needs to consider how to incentivize beneficial usage and alignment. DeFi incentivizes growth through liquidity mining, early user airdrops, or fee rebates; in AI, mere usage growth is insufficient; we need to incentivize usage that improves AI outcomes. At this point, metrics tied to AI alignment become crucial. For example, human feedback loops (like user ratings of AI responses or corrections provided through iDAOs, which will be detailed later) could be recorded, with feedback contributors earning "alignment rewards". Or envision "Attention Proof" or "Participation Proof", where users who invest time in improving AI (by providing preference data, corrections, or new use cases) are rewarded. Metrics could include attention trajectories, capturing high-quality feedback or human attention invested in optimizing AI.
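One plausible shape for such an alignment reward is to split a reward pool among feedback contributors, weighting each rating by its closeness to a consensus label as a crude proxy for feedback quality. A minimal sketch (the function name and weighting rule are assumptions, not a specification):

```python
def alignment_rewards(ratings, consensus: float, pool: float):
    """Split a reward pool among feedback contributors, weighting each
    rating by its closeness to the consensus label (a crude proxy for
    feedback quality); far-off ratings earn nothing."""
    weights = [max(0.0, 1.0 - abs(r - consensus)) for r in ratings]
    total = sum(weights)
    if total == 0:
        return [0.0] * len(ratings)
    return [pool * w / total for w in weights]
```

Consensus-weighted schemes like this discourage spam feedback, though in practice they would need safeguards against herding (everyone copying the majority to maximize payout).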

Just as DeFi needed explorers and dashboards (like DeFi Pulse and DefiLlama) to track TVL and yields, the AI economy will need its own explorers to track these AI-centric metrics—imagine a dashboard called "AI-llama" displaying total aligned data volume, active AI agents, cumulative AI utility rewards, and so on. It would resemble DeFi's dashboards, but with entirely new content.

Towards a DeFi-style AI Flywheel

We need to build an incentive flywheel for AI—treating data as a first-class economic asset—to transform AI development from a closed endeavor into an open, participatory economy, just as DeFi turned finance into an open, user-driven market for liquidity.

Early explorations in this direction are already emerging. For instance, projects like Vana are beginning to reward users for participating in data sharing. The Vana network allows users to contribute personal or community data to a DataDAO (decentralized data pool) and earn dataset-specific tokens (which can be exchanged for the network's native tokens). This is an important step towards monetizing data contributors.

However, merely rewarding contribution behavior is insufficient to replicate the explosive flywheel of DeFi. In DeFi, liquidity providers are rewarded not only for depositing assets but also because the assets they provide have transparent market value, and the yields reflect actual usage (transaction fees, borrowing interest plus incentive tokens). Similarly, the AI data economy needs to go beyond generic rewards and directly price data. Without economic pricing based on data quality, scarcity, or the degree of improvement to models, we may fall into shallow incentives. Simply distributing token rewards for participation may encourage quantity over quality or stagnate when tokens lack actual AI utility linkage. To truly unleash innovation, contributors need to see clear market-driven signals, understand the value of their data, and be rewarded when their data is actually used in AI systems.

We need an infrastructure that focuses more on directly valuing and rewarding data to create a data-centered incentive loop: the more high-quality data people contribute, the better the models become, attracting more usage and data demand, thereby increasing contributor rewards. This will shift AI from a closed competition for big data to an open market for trustworthy, high-quality data.

How do these ideas manifest in real projects? Taking LazAI as an example—this project is building the next generation of blockchain networks and foundational primitives for a decentralized AI economy.

Introduction to LazAI—Aligning AI with Humanity

LazAI is a next-generation blockchain network and protocol designed to solve the AI data alignment problem by introducing new asset standards for AI data, model behavior, and agent interactions, thereby constructing the infrastructure for a decentralized AI economy.

LazAI offers one of the most forward-thinking approaches by making data verifiable, incentivized, and programmable on-chain to address the AI alignment issue. The following will illustrate how the LazAI framework puts these principles into practice in an AI-native blockchain.

Core Issue—Data Misalignment and Lack of Fair Incentives

AI alignment often boils down to the quality of training data, while the future requires new data that is aligned with humans, trustworthy, and governed. As the AI industry shifts from centralized general models to contextualized, aligned intelligence, the infrastructure must evolve in tandem. The next era of AI will be defined by alignment, precision, and provenance. LazAI directly tackles the data alignment and incentive challenges, proposing a fundamental solution: align data at the source and directly reward the data itself. In other words, ensuring that training data verifiably represents human perspectives, is denoised/bias-free, and rewards based on data quality, scarcity, or the degree of improvement to models. This marks a paradigm shift from patching models to organizing data.

LazAI not only introduces primitives but also proposes a new paradigm for data acquisition, pricing, and governance. Its core concepts include Data Anchoring Tokens (DAT) and individual-centered DAOs (iDAOs), which together achieve pricing, provenance, and programmable use of data.

Verifiable and Programmable Data—Data Anchoring Tokens (DAT)

To achieve this goal, LazAI introduces a new on-chain primitive—Data Anchoring Tokens (DAT), a new token standard designed for the assetization of AI data. Each DAT represents a piece of on-chain anchored data and its lineage information: contributor identity, evolution over time, and use cases. This creates a verifiable history for each piece of data—similar to a version control system for datasets (like Git), but secured by blockchain. Since DATs exist on-chain, they possess programmability: smart contracts can manage their usage rules. For example, data contributors can specify that their DAT (like a set of medical images) is only accessible to specific AI models or under certain conditions (enforced through code to implement privacy or ethical constraints). The incentive mechanism is reflected in the fact that DATs can be traded or staked—if the data is valuable to the model, the model (or its owner) may pay for access to the DAT. Essentially, LazAI builds a market for data that is tokenized and traceable. This directly echoes the earlier discussion of the "verifiable data" metric: by checking DATs, one can confirm whether they have been verified, how many models have used them, and what performance improvements they have brought to the models. Such data will receive higher valuations. By anchoring data on-chain and linking economic incentives to quality, LazAI ensures that AI is trained on trustworthy and measurable data. This addresses the issue through incentive alignment—high-quality data is rewarded and stands out.
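To make the DAT description above concrete, here is a simplified off-chain model of what such a record might carry—contributor identity, a content hash anchoring the data, a lineage log, and a programmable access rule. This mirrors the prose but is an illustrative assumption, not LazAI's actual DAT standard:

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class DataAnchoringToken:
    """Simplified DAT-like record: a content hash anchoring the data,
    the contributor's identity, a lineage log, and a usage rule."""
    contributor: str
    data_hash: str
    allowed_models: set
    lineage: list = field(default_factory=list)

    @classmethod
    def mint(cls, contributor: str, raw_data: bytes, allowed_models):
        """Anchor the data by hash and record the minting event."""
        token = cls(contributor,
                    hashlib.sha256(raw_data).hexdigest(),
                    set(allowed_models))
        token.lineage.append(("minted", contributor))
        return token

    def grant_access(self, model_id: str) -> bool:
        """Enforce the contributor's usage rule; log granted accesses
        so the token carries a verifiable usage history."""
        if model_id not in self.allowed_models:
            return False
        self.lineage.append(("accessed", model_id))
        return True
```

The key property is that the lineage list grows append-only with every verified event, which is what lets anyone later check how many models used the data—on an actual chain this log would be secured by consensus rather than a Python list.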

Individual-Centered DAO (iDAO) Framework

The second key component is LazAI's iDAO (individual-centered DAO) concept, which redefines governance in the AI economy by placing individuals (rather than organizations) at the core of decision-making and data ownership. Traditional DAOs often prioritize collective organizational goals, inadvertently undermining individual agency. iDAOs disrupt this logic. They are personalized governance units that allow individuals, communities, or domain-specific entities to directly own, control, and verify the data and models they contribute to AI systems. iDAOs support customized, aligned AI: as a governance framework, they ensure that models consistently adhere to the values or intentions of contributors. From an economic perspective, iDAOs also imbue AI behavior with programmability from the community—rules can be set to restrict how models use specific data, who can access the models, and how the outputs of the models are distributed. For example, an iDAO could stipulate that whenever its AI model is called (such as through API requests or task completions), a portion of the revenue will be returned to the DAT holders who contributed relevant data. This establishes a direct feedback loop between agent behavior and contributor rewards—similar to the mechanism in DeFi where liquidity provider rewards are tied to platform usage. Furthermore, iDAOs can interact with each other through protocols to achieve composable interactions: one AI agent (iDAO) could call another iDAO's data or models under negotiated terms.
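The revenue-sharing rule described above—an iDAO routing part of each model call's fee back to the DAT holders who contributed relevant data—might look like this in a minimal sketch (the treasury cut and pro-rata split are illustrative assumptions):

```python
def distribute_inference_revenue(fee: float,
                                 dat_shares: dict,
                                 idao_cut: float = 0.2) -> dict:
    """Split one inference fee: a fixed cut to the iDAO treasury, the
    remainder pro-rata to DAT holders by contribution share."""
    payouts = {"idao_treasury": fee * idao_cut}
    remainder = fee * (1.0 - idao_cut)
    total_shares = sum(dat_shares.values())
    for holder, share in dat_shares.items():
        payouts[holder] = remainder * share / total_shares
    return payouts
```

Because the split runs on every call, contributor rewards track actual model usage—the direct analogue of DeFi liquidity-provider fees scaling with trading volume.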

By establishing these primitives, LazAI's framework turns the vision of a decentralized AI economy into reality. Data becomes an asset that users can own and profit from, and models shift from private silos to collaborative projects, allowing every participant—from individuals curating unique datasets to developers building small specialized models—to become a stakeholder in the AI value chain.

Building a Trust Foundation for AI: The Verified Computing Framework

In this ecosystem, LazAI's Verified Computing Framework is the core trust layer. It ensures that every generated DAT, every iDAO (individual-centered DAO) decision, and every incentive distribution has a verifiable, traceable chain, making data ownership enforceable, governance processes accountable, and agent behavior auditable. By turning iDAOs and DATs from theoretical concepts into reliable, verifiable systems, the Verified Computing Framework achieves a paradigm shift in trust—from reliance on assumptions to certainty grounded in mathematical verification.

Realizing the Value of a Decentralized AI Economy

With these foundational elements in place, the vision of a decentralized AI economy becomes truly actionable:

  • Data Assetization: Users can assert ownership of data assets and earn returns.
  • Model Collaboration: AI models transition from closed silos to open collaborative products.
  • Participation Rights: From data contributors to vertical model developers, all participants can become stakeholders in the AI value chain.

This incentive-compatible design is expected to replicate the growth momentum of DeFi: when users realize that participating in AI development (by contributing data or expertise) can directly translate into economic opportunities, their enthusiasm for participation will be ignited. As the scale of participants expands, network effects will emerge—more high-quality data leads to better models, attracting more users, which in turn generates more data demand, creating a self-reinforcing growth flywheel.

Conclusion: Moving Towards an Open AI Economy

The journey of DeFi shows that the right primitives can unleash unprecedented growth. In the upcoming AI-native economy, we stand at a similar breakthrough threshold. By defining and implementing new primitives that emphasize data and alignment, we can transform AI development from a centralized engineering endeavor into a decentralized, community-driven enterprise. This journey is not without challenges: we must ensure that economic mechanisms prioritize quality over quantity and avoid ethical pitfalls to prevent data incentives from compromising privacy or fairness. But the direction is clear. Practices like LazAI's DAT and iDAO are paving the way to translate the abstract idea of "AI aligned with humanity" into concrete mechanisms of ownership and governance.

Just as early DeFi experimented with optimizing TVL, liquidity mining, and governance, the AI economy will iterate on its new primitives. Debates and innovations around data value measurement, fair reward distribution, AI agent alignment, and benefit sharing will undoubtedly emerge. This article only scratches the surface of the incentive models that could drive AI democratization, and it aims to spark open discussion and deeper research: How can we design more AI-native economic primitives? What unexpected consequences or opportunities might arise? With broad community participation, we are more likely to build an AI future that is not only technologically advanced but also economically inclusive and aligned with human values.

The exponential growth of DeFi is no magic—it is driven by incentive alignment. Today, we have the opportunity to drive an AI renaissance through similar practices with data and models. By transforming participation into opportunities and opportunities into network effects, we can initiate a flywheel that reshapes value creation and distribution in the digital age.

Let us build this future together—starting with a verifiable dataset, an aligned AI agent, and a new primitive.
