Trusta.AI: Bridging the Trust Gap in the Era of Human-Machine Interaction
TL;DR
- The structural turning point of the identity system has arrived, with AI Agents taking center stage.
The Web3 identity mechanism is transitioning from a singular "human real identity verification" to a new paradigm of "behavior-oriented + multi-agent collaboration." As AI Agents rapidly penetrate core on-chain scenarios, traditional static identity verification and declarative trust systems can no longer support complex interactions and risk prevention.
- Trusta.AI pioneers AI-native trust infrastructure.
Unlike existing solutions such as Worldcoin and Sign Protocol, Trusta.AI has built an integrated trust framework for AI Agents, covering identity declaration, behavior recognition, dynamic scoring, and permission control, achieving a closed-loop capability from "whether it is human" to "whether it is trustworthy" for the first time.
- The SIGMA multi-dimensional trust model reshapes on-chain reputation assets.
By quantifying reputation across five dimensions (professionalism, influence, engagement, monetization, and adoption), Trusta.AI transforms the abstract concept of "trust" into a composable and tradable on-chain asset, becoming the cornerstone of credit for AI social interactions.
- Technical closed-loop: TEE + DID + ML enables dynamic risk control.
Trusta.AI integrates Trusted Execution Environment (TEE), on-chain behavioral data, and machine learning models to form an automatically responsive risk control system that can detect anomalies such as overreach, proxying, and tampering in real-time, triggering permission adjustments.
- High scalability and ecological adaptability, rapidly forming a multi-chain trust network.
Trusta.AI is already deployed across multi-chain ecosystems such as Solana, BNB Chain, Linea, Starknet, Arbitrum, and Celestia, and has established integration partnerships with several leading AI Agent networks, demonstrating rapid replication and cross-chain collaboration capabilities and positioning it as a core hub of the Web3 trust network.
1. Introduction
On the eve of the Web3 ecosystem's move towards large-scale applications, the protagonists on-chain may not be the first billion human users, but rather a billion AI Agents. With the rapid maturation of AI infrastructure and the swift development of multi-agent collaboration frameworks like LangGraph and CrewAI, AI-driven on-chain agents are quickly becoming the main force in Web3 interactions. Trusta predicts that within the next 2-3 years, these AI Agents with autonomous decision-making capabilities will lead to large-scale adoption of on-chain transactions and interactions, potentially replacing 80% of on-chain human behavior and becoming true on-chain "users."
Figure: AI Agent market size (source: Grand View Research)
These AI Agents are not merely the "Sybil" bots of the past that executed scripts; they are intelligent agents capable of understanding context, continuous learning, and making complex independent judgments. They are reshaping on-chain order, driving financial flows, and even guiding governance votes and market trends. The emergence of AI Agents signifies a shift in the Web3 ecosystem from a "human participation" focus to a new paradigm of "human-machine symbiosis."
However, the rapid rise of AI Agents also brings unprecedented challenges: How to identify and authenticate these intelligent agents? How to assess the credibility of their actions? In a decentralized and permissionless network, how to ensure that these agents are not abused, manipulated, or used for attacks?
Therefore, establishing an on-chain infrastructure that can verify the identity and credibility of AI Agents has become a core proposition for the next stage of evolution in Web3. The design of identity recognition, reputation mechanisms, and trust frameworks will determine whether AI Agents can truly achieve seamless collaboration with humans and platforms, and play a sustainable role in the future ecosystem.
2. Project Analysis
2.1 Project Overview
Trusta.AI - dedicated to building Web3 identity and reputation infrastructure through AI.
Trusta.AI has launched the first Web3 user value assessment system - MEDIA reputation scoring, creating the largest real-person certification and on-chain reputation protocol in Web3. It provides on-chain data analysis and real-person certification services for top public chains, exchanges, and leading protocols such as Linea, Starknet, Celestia, Arbitrum, Manta, Plume, Sonic, Binance, Polyhedra, Matr1x, Uxlink, and Go+. Over 2.5 million on-chain certifications have been completed on mainstream chains such as Linea, BSC, and TON, making it the largest identity protocol in the industry.
Trusta is expanding from Proof of Humanity to Proof of AI Agent, establishing a threefold mechanism for identity establishment, quantification, and protection for AI Agent on-chain financial services and social interactions, building a reliable trust foundation for the era of artificial intelligence.
2.2 Trust Infrastructure - AI Agent DID
In the future Web3 ecosystem, AI Agents will play a crucial role, capable of completing interactions and transactions on-chain as well as performing complex operations off-chain. However, distinguishing genuine AI Agents from human-intervened operations is critical to decentralized trust—without a reliable identity authentication mechanism, these intelligent agents are easily susceptible to manipulation, fraud, or abuse. This is why the multiple application attributes of AI Agents in social, financial, and governance contexts must be built on a solid identity authentication foundation.
- Social attributes of AI Agents:
The application of AI Agents in social scenarios is increasingly widespread. For instance, AI virtual idol Luna can autonomously operate social accounts and publish content; AIXBT serves as an AI-driven cryptocurrency market intelligence analyst, providing market insights and investment advice around the clock. These intelligent agents establish emotional and informational interactions with users through continuous learning and content creation, becoming new "digital community influencers" that play a significant role in guiding public opinion within on-chain social networks.
- Financial attributes of AI Agents:
Autonomous asset management:
Some advanced AI Agents have achieved autonomous token issuance. In the future, by integrating with verifiable blockchain architectures, they will hold asset custody rights, completing the entire process from asset creation and intent recognition to automated transaction execution, even enabling seamless cross-chain operations. For example, Virtuals Protocol enables AI agents to autonomously issue tokens and manage assets based on their own strategies, making them genuine participants and builders of on-chain economies and ushering in the era of the "AI subject economy."
Intelligent investment decision-making:
AI Agents are gradually taking on roles as investment managers and market analysts, leveraging large models to process real-time on-chain data, formulate trading strategies with precision, and execute them automatically. On platforms like DeFAI, Paravel, and Polytrader, AI has been embedded in trading engines, significantly enhancing market judgment and operational efficiency and achieving true on-chain intelligent investing.
On-chain autonomous payments:
Payment behavior is essentially a transfer of trust, which must be built on clear identities. When AI Agents make on-chain payments, DID will be a necessary prerequisite. It not only prevents identity forgery and abuse, reducing financial risks such as money laundering, but also meets future compliance traceability needs for DeFi, DAOs, and RWAs. Additionally, combined with the reputation scoring system, DID can help establish payment credit, providing risk control basis and trust foundation for protocols.
- Governance attributes of AI Agents:
In DAO governance, AI Agents can automate the analysis of proposals, assess community opinions, and predict implementation outcomes. Through deep learning of historical voting and governance data, intelligent agents can provide optimization suggestions to the community, improving decision-making efficiency and reducing human governance risks.
AI Agent application scenarios are increasingly diverse, covering social interaction, financial management, and governance decision-making across multiple fields, with their autonomy and intelligence levels continuously improving. Therefore, ensuring that each intelligent agent has a unique and trustworthy identity identifier (DID) is crucial. Without effective identity verification, AI Agents may be impersonated or manipulated, leading to trust collapse and security risks.
In a future fully driven by intelligent agents in the Web3 ecosystem, identity authentication is not only the cornerstone of security but also a necessary defense line for maintaining the healthy operation of the entire ecosystem.
As a pioneer in this field, Trusta.AI, with its leading technological strength and rigorous reputation system, has taken the lead in establishing a comprehensive AI Agent DID certification mechanism, providing solid guarantees for the trustworthy operation of intelligent agents, effectively preventing potential risks, and promoting the robust development of the Web3 intelligent economy.
2.3 Financing and Team
2.3.1 Financing Situation
January 2023: Completed a $3 million seed round financing led by SevenX Ventures and Vision Plus Capital, with other participants including HashKey Capital, Redpoint Ventures, GGV Capital, and SNZ Holding.
June 2025: Completed a new round of financing, with investors including ConsenSys, Starknet, GSR, UFLY Labs, etc.
2.3.2 Team Situation
Peet Chen: Co-founder and CEO, former Vice President of Ant Digital Technology Group, Chief Product Officer of Ant Security Technology, and former General Manager of ZOLOZ Global Digital Identity Platform.
Simon: Co-founder and CTO, former head of Ant Group's AI Security Lab, with fifteen years of experience applying AI technology in security and risk management.
The team has deep technical accumulation and practical experience in artificial intelligence and security risk control, payment system architecture, and identity verification mechanisms. They have long been committed to the deep application of big data and intelligent algorithms in security risk control, as well as security optimization in underlying protocol design and high-concurrency trading environments, possessing solid engineering capabilities and the ability to implement innovative solutions.
3. Technical Architecture
3.1 Technical Analysis
3.1.1 Identity Establishment - DID + TEE

Through dedicated plugins, each AI Agent obtains a unique decentralized identifier (DID) on-chain and securely stores it in a Trusted Execution Environment (TEE). In this black-box environment, key data and computation processes are completely hidden, and sensitive operations remain confidential, preventing external parties from peering into internal operations, effectively building a solid barrier for AI Agent information security.
For agents created before plugin integration, Trusta relies on the comprehensive on-chain scoring mechanism for identity recognition, while newly integrated agents can directly obtain DID-issued "certificates," establishing an autonomous, authentic, and tamper-proof AI Agent identity system.
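As a sketch of the idea, the flow above (derive a unique identifier from an agent's key, anchor it in a registry, and make tampering detectable) might look like the following. This is a minimal illustration, not Trusta's implementation: the `did:trusta` method name, the hashing scheme, and the in-memory registry are all assumptions.

```python
import hashlib
import json

def derive_did(public_key_hex: str, method: str = "trusta") -> str:
    """Derive a deterministic DID from an agent's public key (illustrative scheme)."""
    digest = hashlib.sha256(bytes.fromhex(public_key_hex)).hexdigest()[:40]
    return f"did:{method}:{digest}"

class DIDRegistry:
    """Toy stand-in for an on-chain registry: maps DIDs to identity documents."""
    def __init__(self):
        self.records = {}

    def register(self, public_key_hex: str, metadata: dict) -> str:
        did = derive_did(public_key_hex)
        doc = {"did": did, "publicKey": public_key_hex, "metadata": metadata}
        # A content hash over the document makes later tampering detectable.
        doc["hash"] = hashlib.sha256(json.dumps(doc, sort_keys=True).encode()).hexdigest()
        self.records[did] = doc
        return did

    def verify(self, did: str) -> bool:
        doc = self.records.get(did)
        if doc is None:
            return False
        body = {k: v for k, v in doc.items() if k != "hash"}
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() == doc["hash"]

registry = DIDRegistry()
did = registry.register("ab" * 32, {"agentName": "example-agent"})
assert registry.verify(did)
```

In a real deployment the key material and hashing would live inside the TEE, and the registry would be a smart contract rather than a Python dict.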
3.1.2 Identity Quantification - Innovative SIGMA Framework
The Trusta team has consistently adhered to rigorous evaluation and quantitative analysis principles, aiming to create a professional and trustworthy identity certification system.
- The Trusta team was the first to construct and validate the effectiveness of the MEDIA Score model in the "Proof of Humanity" scenario. This model comprehensively quantifies on-chain user profiles across five dimensions: Monetary (interaction amount), Engagement (participation), Diversity (variety), Identity, and Age.

- MEDIA Score is a fair, objective, and quantifiable on-chain user value assessment system. With its comprehensive evaluation dimensions and rigorous methods, it has been widely adopted by multiple leading public chains such as Celestia, Starknet, Arbitrum, Manta, and Linea as an important reference standard for airdrop eligibility screening. It not only focuses on interaction amounts but also encompasses multidimensional indicators such as activity level, contract diversity, identity characteristics, and account age, helping project parties accurately identify high-value users and improve the efficiency and fairness of incentive distribution, fully reflecting its authority and wide recognition in the industry.
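A minimal sketch of how such a five-dimension score could gate airdrop eligibility. The weights, normalization, and threshold below are invented for illustration; the actual MEDIA methodology is not specified here.

```python
# Hypothetical weights; each dimension is assumed normalized to [0, 100].
MEDIA_WEIGHTS = {"monetary": 0.30, "engagement": 0.25, "diversity": 0.20,
                 "identity": 0.15, "age": 0.10}

def media_score(dims: dict) -> float:
    """Weighted sum over the five MEDIA dimensions."""
    return sum(MEDIA_WEIGHTS[k] * dims[k] for k in MEDIA_WEIGHTS)

def airdrop_eligible(dims: dict, threshold: float = 60.0) -> bool:
    """Project teams could screen wallets against a minimum score."""
    return media_score(dims) >= threshold

wallet = {"monetary": 80, "engagement": 70, "diversity": 60, "identity": 50, "age": 40}
print(round(media_score(wallet), 1))  # 65.0
```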
Building on the successful construction of the human user assessment system, Trusta has migrated and upgraded the experience of MEDIA Score to the AI Agent scenario, establishing a Sigma assessment system that better aligns with the behavioral logic of intelligent agents.

- Professionalism: The agent's expertise and degree of specialization.
- Influence: The agent's social and digital influence.
- Engagement: The consistency and reliability of its on-chain and off-chain interactions.
- Monetization: The financial health and stability of the agent's token ecosystem.
- Adoption: The frequency and efficiency of AI Agent usage.
The Sigma scoring mechanism constructs a logical closed-loop assessment system from "capability" to "value" across five dimensions. MEDIA focuses on assessing the multifaceted participation of human users, while Sigma emphasizes the professionalism and stability of AI agents in specific fields, reflecting a shift from breadth to depth, more aligned with the needs of AI Agents.
First, professionalism establishes the agent's capability base. Engagement reflects whether the agent is consistently and stably involved in practical interactions, a key support for building subsequent trust. Influence represents the reputation feedback generated in the community or network after participation, indicating the agent's credibility and reach. Monetization assesses whether it possesses value-accumulation capability and financial stability within the economic system, laying the foundation for a sustainable incentive mechanism. Finally, adoption serves as a comprehensive indicator of how widely the agent is accepted in actual use, validating all the preceding capabilities and performance.
This system progresses layer by layer, with a clear structure that can comprehensively reflect the overall quality and ecological value of AI Agents, thus achieving quantitative assessment of AI performance and value, transforming abstract advantages and disadvantages into a specific, measurable scoring system.
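The layered SIGMA assessment described above could be sketched as a weighted composite that maps to a coarse trust tier. The equal weights and tier bands below are assumptions, not Trusta's published parameters.

```python
SIGMA_DIMS = ("professionalism", "influence", "engagement", "monetization", "adoption")

def sigma_score(dims: dict, weights: dict = None) -> float:
    """Composite SIGMA score; equal weights by default (an assumption)."""
    weights = weights or {d: 1 / len(SIGMA_DIMS) for d in SIGMA_DIMS}
    return sum(weights[d] * dims[d] for d in SIGMA_DIMS)

def trust_tier(score: float) -> str:
    """Map a 0-100 score to a coarse trust tier (bands are illustrative)."""
    if score >= 80:
        return "A"
    if score >= 60:
        return "B"
    if score >= 40:
        return "C"
    return "D"

agent = {"professionalism": 85, "influence": 70, "engagement": 90,
         "monetization": 60, "adoption": 75}
print(trust_tier(sigma_score(agent)))  # B
```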
Currently, the SIGMA framework has advanced collaborations with well-known AI Agent networks such as Virtuals, Eliza OS, and Swarm, demonstrating its application potential in AI agent identity management and reputation-system construction, and it is gradually becoming a core engine for building trustworthy AI infrastructure.
3.1.3 Identity Protection - Trust Evaluation Mechanism
In a truly resilient and highly trustworthy AI system, the most critical aspect is not only the establishment of identity but also the continuous verification of identity. Trusta.AI introduces a continuous trust evaluation mechanism that can monitor certified intelligent agents in real-time to determine whether they are under illegal control, facing attacks, or subjected to unauthorized human intervention. The system identifies deviations that may occur during the agent's operation through behavior analysis and machine learning, ensuring that every agent's action remains within the established strategies and frameworks. This proactive approach ensures immediate detection of any deviations from expected behavior and triggers automatic protective measures to maintain the integrity of the agents.
Trusta.AI has established a security guard mechanism that is always online, continuously reviewing every interaction process to ensure that all operations comply with system specifications and established expectations.
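One simple way to realize such continuous monitoring is a rolling-baseline anomaly detector: learn an agent's normal action rate and flag large deviations. The z-score heuristic, window size, and thresholds below are illustrative assumptions, not Trusta's actual model.

```python
from collections import deque
import statistics

class TrustMonitor:
    """Flags behavior that deviates from an agent's rolling baseline (z-score heuristic)."""
    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, actions_per_minute: float) -> bool:
        """Return True if the observation is anomalous, then record normal behavior."""
        anomalous = False
        if len(self.history) >= 10:  # wait for a minimal baseline
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            anomalous = abs(actions_per_minute - mean) / stdev > self.z_threshold
        if not anomalous:
            # Only extend the baseline with normal behavior, so attacks don't poison it.
            self.history.append(actions_per_minute)
        return anomalous

monitor = TrustMonitor()
for rate in [5, 6, 5, 7, 6, 5, 6, 7, 5, 6, 5, 6]:
    monitor.observe(rate)
print(monitor.observe(500))  # a sudden burst is flagged -> True
```

In a full system, a flagged observation would trigger the automatic protective measures described above, such as downgrading the agent's permissions pending review.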
3.2 Product Introduction
3.2.1 AgentGo
Trusta.AI assigns decentralized identity identifiers (DIDs) to each on-chain AI Agent and rates them based on on-chain behavioral data, constructing a verifiable and traceable AI Agent trust system. Through this system, users can efficiently identify and filter high-quality intelligent agents, enhancing their user experience. Currently, Trusta has completed the collection and identification of all AI Agents on the network and has issued decentralized identifiers to them, establishing a unified summary index platform—AgentGo, further promoting the healthy development of the intelligent agent ecosystem.
Human users query and verify identities:
Through the Dashboard provided by Trusta.AI, human users can conveniently retrieve the identity and reputation score of a specific AI Agent to assess its trustworthiness.
Social group chat scenario: When a project team uses an AI Bot to manage the community or post messages, community users can verify through the Dashboard whether the AI is a genuine autonomous agent, avoiding being misled or manipulated by "fake AIs."
AI Agents automatically call the index and verify:
AI can directly read the index interface to quickly confirm each other's identities and reputations, ensuring the security of collaboration and information exchange.
Financial regulatory scenario: If an AI agent autonomously issues tokens, the system can directly index its DID and rating to determine whether it is a certified AI Agent, automatically linking to platforms like CoinMarketCap to assist in tracking its asset circulation and issuance compliance.
Governance voting scenario: When introducing AI voting in governance proposals, the system can verify whether the initiator or participant in the voting is a genuine AI Agent, preventing voting rights from being controlled and abused by humans.
DeFi credit lending: Lending protocols can grant AI Agents different credit amounts based on the SIGMA scoring system, forming native financial relationships between intelligent agents.
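For the lending scenario, a protocol could map SIGMA scores to credit lines along these lines. The score floor, base limit, and multiplier are hypothetical parameters, not values from the source.

```python
def credit_limit(sigma_score: float, base_limit: float = 1000.0) -> float:
    """Grant a credit line that scales with the agent's SIGMA score (tiers assumed)."""
    if sigma_score < 40:
        return 0.0  # unscored or low-trust agents receive no credit
    multiplier = 1 + (sigma_score - 40) / 20  # +1x per 20 points above the floor
    return base_limit * multiplier

print(credit_limit(40))  # 1000.0
print(credit_limit(80))  # 3000.0
```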
AI Agent DID is no longer just an "identity," but also the underlying support for building trustworthy collaboration, financial compliance, and community governance, becoming an essential infrastructure for the development of AI-native ecosystems. With the establishment of this system, all confirmed secure and trustworthy nodes form a tightly interconnected network, achieving efficient collaboration and functional interconnection among AI Agents.
Based on Metcalfe's Law, the value of the network grows roughly with the square of the number of connected nodes, further promoting the construction of a more efficient, trustworthy, and collaborative AI Agent ecosystem, enabling resource sharing, capability reuse, and continuous value addition among intelligent agents.
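Metcalfe's Law values a network by its possible pairwise connections, n(n-1)/2, so value grows roughly quadratically in node count; a quick illustration:

```python
def metcalfe_value(n_nodes: int, value_per_link: float = 1.0) -> float:
    """Metcalfe's Law: network value scales with possible pairwise links, n(n-1)/2."""
    return value_per_link * n_nodes * (n_nodes - 1) / 2

print(metcalfe_value(10))   # 45.0
print(metcalfe_value(100))  # 4950.0 -- 10x the nodes, ~110x the value
```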
AgentGo, as the first trusted identity infrastructure for AI Agents, is providing indispensable core support for building a highly secure and collaborative intelligent ecosystem.
3.2.2 TrustGo
TrustGo is an on-chain identity management tool developed by Trusta that scores wallets based on current interactions, wallet "age," transaction count, and transaction volume. TrustGo also provides on-chain value rankings, helping users actively pursue airdrops and improve their chances of qualifying for them.
The existence of MEDIA Score in the TrustGo evaluation mechanism is crucial, as it provides users with the ability to self-assess their activities. The MEDIA Score assessment system includes not only simple indicators such as the quantity and amount of user interactions with smart contracts, protocols, and dApps but also focuses on user behavior patterns. Through MEDIA Score, users can gain deeper insights into their on-chain activities and value, while project teams can accurately allocate resources and incentives to users who truly contribute.
TrustGo is gradually transitioning from the MEDIA mechanism aimed at human identities to the SIGMA trust framework aimed at AI Agents, adapting to the identity verification and reputation assessment needs of the era of intelligent agents.
3.2.3 TrustScan
The TrustScan system is an identity verification solution for the new era of Web3, with the core goal of accurately identifying whether on-chain entities are human, AI Agents, or Sybils. It employs a dual verification mechanism of knowledge-driven + behavior analysis, emphasizing the key role of user behavior in identity recognition.
TrustScan can also achieve lightweight human verification through AI-driven question generation and engagement detection, ensuring user privacy and data security based on the TEE environment, enabling continuous identity maintenance. This mechanism constructs a "verifiable, sustainable, and privacy-protecting" foundational identity system.
With the large-scale rise of AI Agents, TrustScan is upgrading to a more intelligent behavior fingerprint recognition mechanism. This mechanism has three major technical advantages:
- Uniqueness: By analyzing user operation paths, mouse trajectories, transaction frequencies, and other behavioral characteristics during interactions, it forms a unique behavior pattern.
- Dynamism: The system can automatically recognize the temporal evolution of behavioral habits and dynamically adjust authentication parameters to ensure the long-term validity of identities.
- Concealment: Without requiring active user participation, the system can collect and analyze behavior in the background, balancing user experience and security.
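A behavior fingerprint of the kind described above can be treated as a numeric feature vector and matched with a similarity measure. The feature choices and the cosine-similarity threshold below are illustrative assumptions, not the actual recognition model.

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine of the angle between two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def same_actor(fingerprint: list, session: list, threshold: float = 0.95) -> bool:
    """Match a live session's feature vector against a stored behavior fingerprint."""
    return cosine_similarity(fingerprint, session) >= threshold

# Hypothetical features: [avg click interval, swipe speed, tx frequency, path entropy]
stored = [0.42, 1.8, 3.1, 0.77]
live = [0.40, 1.9, 3.0, 0.75]
print(same_actor(stored, live))  # True: behavior closely matches the enrolled pattern
```

The "dynamism" property would correspond to periodically re-fitting the stored vector as habits drift, and "concealment" to collecting these features passively in the background.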

Additionally, TrustScan has implemented an anomaly detection system to timely identify potential risks, such as malicious AI control and unauthorized operations, effectively safeguarding the platform's availability and attack resistance.
Compared to traditional verification methods, the solution launched by Trusta.AI demonstrates significant advantages in security, identification accuracy, and deployment flexibility.
Low hardware dependency, low deployment threshold
Behavior fingerprints do not rely on specialized hardware such as iris scanners or fingerprint readers. Instead, they model and identify users based on behavioral characteristics exhibited during routine operations such as clicks, swipes, and inputs, greatly reducing deployment barriers. This lightweight implementation enhances the system's adaptability and makes it easier to integrate into various Web3 applications, making it especially suitable for identity verification in resource-constrained or multi-device environments.
High identification accuracy
Compared to traditional biometric methods such as fingerprint or facial recognition, behavior fingerprints combine high-dimensional behavioral data such as operation paths, click rhythms, and temporal frequencies to form a more nuanced and dynamic identification model.
High uniqueness of behavior fingerprints, difficult to imitate
Behavior fingerprints possess a high degree of uniqueness, with each user or AI Agent forming distinct behavioral characteristics in terms of operational habits, interaction rhythms, and path choices. These characteristics are statistically difficult for others to replicate or forge, making behavior fingerprints more secure and anti-counterfeiting compared to traditional static credentials in identity recognition.
4. Token Model and Economic Mechanism
4.1 Token Economics

- Ticker: $TA
- Total Supply: 1 billion
- Community Incentives: 25%
- Foundation Reserves: 20%
- Team: 18%
- Market and Partnerships: 13%
- Seed Investment: 9%
- Strategic Investment: 4%
- Advisors, Liquidity, and Airdrops: 3%
- Public Offering: 2%
4.2 Token Utility
$TA is the core incentive and operational engine of the Trusta.AI identity network, connecting the value flow between humans, AI, and infrastructure roles.
4.2.1 Staking Utility
$TA serves as the "ticket" and credibility guarantee mechanism for entering the Trusta identity network:
- Issuers: Must stake $TA to obtain the authority to issue identity certifications.
- Verifiers: Must stake $TA to perform identity verification tasks.
- AI infrastructure providers: Including data, models, and computing power providers, must stake $TA to qualify for network services.
- Users (humans and AI): Can stake $TA to receive discounts on identity services and have the opportunity to share platform revenue.
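The role-based staking requirements above could be enforced with a simple gate of this shape; the minimum stake amounts are purely illustrative, since the source does not specify them.

```python
# Hypothetical minimum $TA stakes per role; real values would be set by governance.
MIN_STAKE = {"issuer": 50_000, "verifier": 20_000, "infra_provider": 30_000, "user": 0}

def can_act(role: str, staked_ta: float) -> bool:
    """An address may act in a role only if its $TA stake meets that role's minimum."""
    return staked_ta >= MIN_STAKE[role]

print(can_act("issuer", 60_000))   # True
print(can_act("verifier", 5_000))  # False
```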
4.2.2 Payment Utility
$TA is the settlement token for all identity services within the network:
- End users use $TA to pay for identity certification, proof generation, and other service fees.
- Scenario providers use $TA to pay for SDK integration and API call fees.
- Issuers & verifiers use $TA to pay infrastructure providers for computing power, data, and model usage costs.
4.2.3 Governance Utility
$TA holders can participate in Trusta.AI governance decisions, including:
- Voting on the future development direction of the project.
- Voting on governance proposals for core strategies and ecological plans.
4.2.4 Mainnet Utility
$TA will serve as the mainnet gas token for the trusted identity network, used for transaction fees and related operations on the mainnet.
5. Competitive Landscape Analysis
Decentralized identity (DID) systems are evolving from static declarations to dynamic trust. On one hand, projects like Worldcoin and Humanity Protocol focus on "identity uniqueness (PoP)" as their core goal, concentrating on anti-Sybil attacks and real identity verification; on the other hand, emerging on-chain declaration/proof Attestation protocols represented by Sign Protocol are building universal authentication infrastructure from a developer tool perspective.
Trusta.AI uniquely combines the declaration layer and trust layer in its architectural design, focusing on AI + Crypto scenarios, attempting to answer a core question for the next-generation DID system:
How to achieve "sustained trust" in on-chain identities—especially in an era of rapidly rising AI agent systems.
5.1 Competitive Product Analysis

5.2 Trusta.AI's Differentiated Positioning and Design Philosophy
Compared to existing protocols, Trusta.AI is not just an identity tool protocol but an "identity operating system" aimed at the future AI + Web3 multi-agent system. Its core advantages are reflected in the following three points:
5.2.1 Multi-role Orientation: Supporting Dual Identity Construction for Human Users and AI Agents
Traditional identity protocols generally serve "humans," while Trusta expands "identity" to include any entity capable of generating behavior and transaction intentions, including AI models, intelligent agents, automated executors, etc. AI Agents can obtain clear on-chain identities, behavior records, and trust endorsements through Trusta, thus becoming "native users" on-chain.
5.2.2 Composable, Verifiable, and Inheritable Attestation Architecture
The modular design of Portal + Schema + Module introduced by Trusta allows any identity or behavior proof to be combined logically, verified, and published to the chain in a standardized manner. This architecture is highly flexible, supporting both simple "POH declarations" and complex reputation systems (such as TrustGo's MEDIA reputation scoring).
5.2.3 Open Identity Data Lake, Empowering On-chain Finance and Recommendation Systems
TAS constructs a "publicly verifiable dataset" that can be used not only for identity but also for DeFi risk control, credit lending, content recommendation, DID login, and various other applications. Trusta's long-term vision is to provide a readable, trustworthy, and reconstructable identity layer for the entire Web3.
Conclusion: Trusta.AI is building the strongest infrastructure in the field of trustworthy identity and behavior governance in Web3.
As the current industry leader in technological depth and comprehensive systems, Trusta.AI has achieved continuous verification, dynamic scoring, and risk control linkage for AI Agent behavior through leading machine learning and behavior modeling technologies. It integrates three major functional modules: on-chain declarations, on-chain analysis, and human-machine recognition, and based on this, it has established a complete closed-loop from "identity → behavior → trust → permissions."
Unlike other projects with fragmented functionality and no dynamic feedback, Trusta.AI is one of the few pioneers in the Web3 ecosystem to have shipped an integrated trust system. It is deployed across multiple chain environments (Solana, BNB Chain, Linea, etc.), and its flagship product, AgentGo, has validated its practical capabilities and commercial potential in AI agent scenarios.
As the demand for AI-native identities explodes, Trusta.AI is expected to become the trust foundation of the AI era. It is not only a "firewall" for trustworthy identities but also the "central brain" that ensures decision safety, permission governance, and risk control within the intelligent agent ecosystem.
Trusta.AI is not just "trustworthy"; it is a "controllable, evolvable, and scalable" next-generation trust operating system.
With the continuous rise in demand for intelligent agents and identity verification, Trusta.AI is driving the Web3 trust system into a new stage. Its integrated architecture combines on-chain declarations, behavior recognition, and AI risk control, providing AI Agents with dynamic, precise, and sustainable identity verification and trust assessment mechanisms. Trusta.AI's multi-chain compatibility, low hardware dependency deployment approach, and future-oriented machine learning architecture not only fill the gap between traditional human identity systems and AI agent governance but also reshape the trust paradigm of on-chain interactions.
However, the future of an on-chain world dominated by AI Agents remains full of uncertainties: Can trust mechanisms truly evolve coherently? Will AI become a new point of centralized risk? How should decentralized governance accommodate these uncertain autonomous intelligent agents? These questions will determine the order and direction of future on-chain society.
In this systemic competition based on "trust" as the underlying logic, Trusta.AI has taken the lead in completing the integrated closed-loop layout from identity recognition, behavior assessment to dynamic control, constructing the industry's first truly trustworthy execution framework aimed at AI Agents. In terms of technological depth, system integrity, and practical landing effectiveness, Trusta.AI is at the absolute forefront of similar projects, becoming a leader in the new paradigm of on-chain trust mechanisms.
But this is just the prologue. With the accelerated expansion of the AI-native ecosystem, on-chain identity and trust mechanisms are about to enter a new cycle of rapid evolution. In the future, governance, collaboration, and even value distribution will undergo intense restructuring around "trustworthy intelligent agents." And Trusta.AI stands at the forefront of this paradigm shift, not just as a participant but as a definitional force.
References
- https://www.trustalabs.ai/whitepaper
- https://trusta-labs.gitbook.io/trustaai/products
- https://www.grandviewresearch.com/industry-analysis/ai-agents-market-report
- https://www.panewslab.com/en/articledetails/rhukqix1.html
- https://www.theblockbeats.info/en/news/45787
- https://share.foresightnews.pro/article/detail/78338
- https://trusta-labs.gitbook.io/trustalabs/trustgo/what-is-media-score