Author | Henry @IOSG
Introduction
In just 3 months, the market capitalization of AI x memecoins has reached $13.4 billion, comparable to some mature L1s like AVAX or SUI.
In fact, artificial intelligence and blockchain have a long shared history: from early decentralized model training on Bittensor subnets, to decentralized GPU/compute marketplaces like Akash and io.net, and now to the current wave of AI x memecoins and agent frameworks on Solana. Each stage shows that crypto can complement AI through resource aggregation, enabling sovereign AI and consumer-facing use cases.
In the first wave of the Solana AI coin trend, some projects delivered meaningful utility rather than pure speculation. We saw frameworks like ai16z's Eliza emerge, AI agents like aixbt providing market analysis and content creation, and toolkits integrating AI with blockchain capabilities.
In the second wave, as more tooling matures, applications have become the key value driver, and DeFi is the perfect testing ground for these innovations. For simplicity, we will refer to the combination of AI and DeFi in this study as "DeFAI."
According to CoinGecko data, the market capitalization of DeFAI is approximately $1 billion, with Griffain holding a 45% share and $ANON accounting for 22%. The sector began growing rapidly after December 25, coinciding with the post-Christmas surge in frameworks and platforms like Virtuals and ai16z.
▲ Source: Coingecko.com
This is just the first step; DeFAI's potential goes far beyond this. Although DeFAI is still at the proof-of-concept stage, its potential should not be underestimated: it will harness the intelligence and efficiency AI provides to turn DeFi into a more user-friendly, intelligent, and efficient financial ecosystem.
Before delving into the world of DeFai, we need to understand how agents operate in DeFi/blockchain.
How Agents Operate in DeFi Systems
AI agents are programs that perform tasks on behalf of users according to a workflow. At their core are LLMs (large language models), which respond based on the knowledge they were trained on or have since learned, though these responses are often limited.
Agents are fundamentally different from bots. Bots are typically built for a specific task, require human supervision, and operate under predefined rules and conditions. Agents, by contrast, are more dynamic and adaptive, learning on their own to achieve a given goal.
To create a more personalized experience and comprehensive responses, agents can store past interactions in memory, allowing them to learn from user behavior patterns and adjust their responses, generating tailored suggestions and strategies based on historical context.
On a blockchain, agents can interact with smart contracts and accounts to handle complex tasks without continuous human intervention. To simplify the DeFi user experience, for example, an agent can execute multi-step bridging and farming in one click, optimize farming strategies for higher returns, execute buy/sell trades, and conduct market analysis, all autonomously.
According to research by @3sigma, most agents follow a six-part workflow:
- Data Collection
- Model Inference
- Decision Making
- Hosting and Operation
- Interoperability
- Wallet
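To make the workflow concrete before walking through each step, here is a minimal TypeScript sketch of the six-step loop. The interfaces and names are illustrative assumptions, not the API of any specific framework.

```typescript
// A minimal sketch of the six-step agent loop described above.
// All interfaces are illustrative, not any specific framework's API.

interface MarketSnapshot {
  onchain: Record<string, unknown>;  // step 1: indexer/oracle reads
  offchain: Record<string, number>;  // step 1: prices from data APIs
}

interface Decision {
  action: "swap" | "stake" | "hold";
  params: Record<string, string | number>;
}

interface Agent {
  collect(): Promise<MarketSnapshot>;            // 1. data collection
  infer(s: MarketSnapshot): Promise<Decision>;   // 2. model inference
  decide(d: Decision): Promise<Decision | null>; // 3. decision making (risk checks)
  execute(d: Decision): Promise<string>;         // 5./6. protocol calls, signed via wallet
}

async function runLoop(agent: Agent): Promise<void> {
  // 4. hosting/operation: this loop itself runs off-chain (cloud or DePIN host)
  const snapshot = await agent.collect();
  const raw = await agent.infer(snapshot);
  const approved = await agent.decide(raw);
  if (approved) {
    const txHash = await agent.execute(approved);
    console.log(`submitted: ${txHash}`);
  }
}
```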
1. Data Collection
First, the model needs to understand its working environment, which requires multiple data streams to keep it in sync with market conditions. These include: 1) on-chain data from indexers and oracles, and 2) off-chain data from price platforms, such as the data APIs of CMC, CoinGecko, and other providers.
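As a concrete illustration of the off-chain leg, the snippet below pulls spot prices from CoinGecko's public /simple/price endpoint (Node 18+ for the built-in fetch). The on-chain leg from indexers and oracles would be merged with this feed in practice.

```typescript
// Illustrative off-chain data pull using CoinGecko's public /simple/price endpoint.
async function fetchSpotPrices(ids: string[]): Promise<Record<string, { usd: number }>> {
  const url =
    `https://api.coingecko.com/api/v3/simple/price` +
    `?ids=${ids.join(",")}&vs_currencies=usd`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`price fetch failed: ${res.status}`);
  return (await res.json()) as Record<string, { usd: number }>;
}

// Usage: keep the agent's view of the market fresh.
const prices = await fetchSpotPrices(["solana", "ethereum"]);
console.log(prices.solana.usd, prices.ethereum.usd);
```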
2. Model Inference
Once the model understands its environment, it must apply that knowledge to make predictions or take actions based on new, unseen input from users. The models agents use include:
- Supervised and Unsupervised Learning: Models trained on labeled or unlabeled data to predict outcomes. In the blockchain context, these models can analyze governance forum data to predict voting outcomes or identify trading patterns.
- Reinforcement Learning: Models that learn through trial and error by evaluating the rewards and penalties of their actions. Applications include optimizing token trading strategies, such as determining the best entry points for buying tokens or adjusting farming parameters (see the sketch after this list).
- Natural Language Processing (NLP): Techniques for understanding and processing human language input, which is very valuable for scanning governance forums and proposals for insights.
▲ Source: https://www.researchgate.net/figure/The-main-types-of-machine-learning-Main-approaches-include-classification-and-regression_fig1_354960266
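To make the reinforcement-learning bullet concrete, the toy sketch below uses an epsilon-greedy multi-armed bandit to learn which of several entry strategies earns the best average reward. It is purely illustrative; production trading agents use far richer state and reward design.

```typescript
// Toy RL example: an epsilon-greedy bandit choosing among entry strategies.
class EpsilonGreedyBandit {
  private counts: number[];
  private values: number[]; // running mean reward per strategy

  constructor(private nArms: number, private epsilon = 0.1) {
    this.counts = new Array(nArms).fill(0);
    this.values = new Array(nArms).fill(0);
  }

  selectArm(): number {
    if (Math.random() < this.epsilon) {
      return Math.floor(Math.random() * this.nArms); // explore
    }
    return this.values.indexOf(Math.max(...this.values)); // exploit
  }

  update(arm: number, reward: number): void {
    this.counts[arm] += 1;
    // incremental mean: v += (r - v) / n
    this.values[arm] += (reward - this.values[arm]) / this.counts[arm];
  }
}

// Usage: arm 0 = "buy on dip", arm 1 = "buy on breakout", arm 2 = "DCA".
const bandit = new EpsilonGreedyBandit(3);
const arm = bandit.selectArm();
bandit.update(arm, 0.02); // reward = realized return of the chosen entry
```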
3. Decision Making
With trained models and data, agents can leverage their decision-making capabilities to take action. This includes interpreting the current situation and responding appropriately.
At this stage, the optimization engine plays a crucial role in seeking the best outcomes. For example, before executing yield strategies, agents need to balance various factors such as slippage, spreads, transaction costs, and potential profits.
Since a single agent may not be able to optimize decisions across different domains, multi-agent systems can be deployed for coordination.
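A minimal sketch of that optimization step might look like the following: score candidate routes by expected net value after slippage and fees, then pick the best. The field names are our assumptions, not any aggregator's actual schema.

```typescript
// Sketch of the "optimization engine" step: score candidate routes by
// expected net outcome after slippage, spread, and transaction costs.
interface Route {
  venue: string;
  expectedOut: number;  // tokens received at quoted price
  slippagePct: number;  // expected price impact, e.g. 0.3 means 0.3%
  feeUsd: number;       // gas + protocol fees in USD terms
  outTokenUsd: number;  // USD price of the output token
}

function netUsdValue(r: Route): number {
  const afterSlippage = r.expectedOut * (1 - r.slippagePct / 100);
  return afterSlippage * r.outTokenUsd - r.feeUsd;
}

function bestRoute(routes: Route[]): Route {
  return routes.reduce((a, b) => (netUsdValue(b) > netUsdValue(a) ? b : a));
}
```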
4. Hosting and Operation
Because their tasks are computationally intensive, AI agents typically host their models off-chain. Some rely on centralized cloud services like AWS, while more decentralization-minded teams use distributed compute networks like Akash or io.net, with Arweave for data storage.
Although AI agent models operate off-chain, agents need to interact with on-chain protocols to execute smart contract functions and manage assets. This interaction requires secure key management solutions, such as MPC wallets or smart contract wallets, to handle transactions securely. Agents can operate via APIs, communicating and interacting with their communities on social platforms like Twitter and Telegram.
5. Interoperability
Agents need to interact with various protocols while staying updated across different systems. They typically use API bridges to obtain external data, such as price feeds.
To stay current with protocol state and respond appropriately, agents synchronize in real time, whether through conventional webhooks or decentralized protocols like IPFS.
6. Wallet
Agents require a wallet or access to private keys to initiate blockchain transactions. There are two common wallet/key management approaches in the market: MPC-based and TEE-based solutions.
For portfolio-management applications, MPC or TSS can split keys among the agent, the user, and trusted parties, so users retain a degree of control over the AI. Coinbase's AI wallet integration with Replit is an effective implementation of this approach, demonstrating how AI agents can operate MPC wallets.
For fully autonomous AI systems, TEE provides an alternative solution by storing private keys in a secure enclave, allowing the entire AI agent to operate in a concealed and protected environment, free from third-party interference. However, TEE solutions currently face two major challenges: hardware centralization and performance overhead.
Once these challenges are overcome, we will be able to create autonomous agents on the blockchain, where different agents can perform their roles in the DeFi ecosystem to enhance efficiency and improve the on-chain trading experience.
Overall, I tentatively categorize DeFi x AI into four main categories:
- Abstraction/User-Experience-Friendly AI
- Yield Optimization or Portfolio Management
- Market Analysis or Prediction Bots
- DeFai Infrastructure/Platforms
Opening the Door to the World of DeFi x AI: DeFAI
▲ Source: IOSG Ventures
#1. Abstraction/User-Experience-Friendly AI
The purpose of artificial intelligence is to enhance user efficiency, solve complex problems, and simplify intricate tasks. Abstraction-focused AI can hide the complexity of DeFi from both novice and experienced traders.
In the blockchain space, effective AI solutions should be able to:
- Automatically execute multi-step trades and staking operations without requiring users to have any industry knowledge;
- Conduct real-time research, providing users with all the necessary information and data to make informed trading decisions;
- Gather data from various platforms, identify market opportunities, and provide users with comprehensive analysis.
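A hypothetical sketch of how such a tool can turn a free-form prompt into a safely executable action: the model is constrained to emit a typed intent, which is validated before anything touches a wallet. The schema and parser are our illustration, not any product's actual format.

```typescript
// Map a natural-language prompt to a typed DeFi action via structured output.
type Intent =
  | { kind: "swap"; fromToken: string; toToken: string; amount: number }
  | { kind: "stake"; token: string; amount: number }
  | { kind: "research"; topic: string };

function validateIntent(raw: unknown): Intent {
  const o = raw as { [k: string]: unknown };
  if (
    o.kind === "swap" &&
    typeof o.fromToken === "string" &&
    typeof o.toToken === "string" &&
    typeof o.amount === "number"
  ) {
    return { kind: "swap", fromToken: o.fromToken, toToken: o.toToken, amount: o.amount };
  }
  if (o.kind === "stake" && typeof o.token === "string" && typeof o.amount === "number") {
    return { kind: "stake", token: o.token, amount: o.amount };
  }
  if (o.kind === "research" && typeof o.topic === "string") {
    return { kind: "research", topic: o.topic };
  }
  throw new Error("model output did not match any known intent");
}

// e.g. the LLM is prompted to answer strictly in this JSON shape:
const intent = validateIntent(
  JSON.parse('{"kind":"swap","fromToken":"USDC","toToken":"SOL","amount":100}')
);
```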
Most of these abstraction tools are built around ChatGPT-style models. While such models need to integrate seamlessly with blockchains, none appears to have been specifically trained or fine-tuned on blockchain data.
#Griffain
The concept was proposed by Tony, the founder of Griffain, during a Solana hackathon; he later turned the idea into a working product, earning support and recognition from Solana co-founder Anatoly Yakovenko.
In simple terms, Griffain is currently the first and most capable abstraction AI on Solana, able to execute swaps, manage wallets, mint NFTs, and snipe tokens.
Here are the specific features offered by Griffain:
- Execute trades using natural language
- Issue tokens via Pump.fun, mint NFTs, and optionally select addresses for airdrops
- Coordinate multiple agents
- Tweet on behalf of users
- Snipe newly launched memecoins on Pump.fun based on specific keywords or conditions
- Stake, automate, and execute DeFi strategies
- Schedule tasks, letting users input commands to create customized agents
- Gather data from platforms for market analysis, such as identifying token-holder distributions
Despite Griffain's many features, users still need to manually enter token addresses or give the agent specific execution instructions, so there is still room to optimize the current product for beginners unfamiliar with these technical commands.
So far, Griffain offers two types of AI agents: Personal AI Agents and Special Agents.
Personal AI Agents are controlled by users. Users can customize instructions and input memory settings to tailor the agent to their individual circumstances.
Special Agents are designed for specific tasks. For example, the "Airdrop Agent" can be trained to find addresses and allocate tokens to specified holders, while the "Staking Agent" can be programmed to stake SOL or other assets into asset pools for mining strategies.
A notable feature of Griffain is its multi-agent collaboration system, where multiple agents can work together in a chat room. These agents can independently solve complex tasks while maintaining collaboration.
▲ Source: https://griffain.com
After account creation, the system generates a wallet, and users can delegate their accounts to agents, allowing them to autonomously execute trades and manage portfolios.
The keys are split using Shamir Secret Sharing (SSS), ensuring that neither Griffain nor Privy can custody the wallet on its own. According to Slate's documentation, SSS splits the key into three shares:
- Device Share: Stored in the browser and retrieved when the tab is opened
- Authorization Share: Stored on Privy servers and retrieved during verification and login to applications
- Recovery Share: Encrypted and stored on Privy's servers, decrypted and accessed only when the user enters a password to log in
Additionally, users can choose to export or revoke access on the Griffain frontend.
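To illustrate the 2-of-3 share design described above, here is a toy Shamir Secret Sharing implementation. It is educational only: real wallet infrastructure such as Privy's uses audited cryptographic libraries, secure randomness, and hardened key handling.

```typescript
// Toy 2-of-3 Shamir Secret Sharing, mirroring the device/auth/recovery split.
const P = 170141183460469231731687303715884105727n; // 2^127 - 1, a Mersenne prime

function modPow(base: bigint, exp: bigint, m: bigint): bigint {
  let result = 1n;
  base %= m;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % m;
    base = (base * base) % m;
    exp >>= 1n;
  }
  return result;
}

const mod = (a: bigint): bigint => ((a % P) + P) % P;
const inv = (a: bigint): bigint => modPow(mod(a), P - 2n, P); // Fermat inverse

// Split: f(x) = secret + a1*x over GF(P); degree 1 => any 2 of 3 shares suffice.
function split(secret: bigint): Array<[bigint, bigint]> {
  const a1 = BigInt(Math.floor(Math.random() * 1e15)); // demo randomness only!
  return [1n, 2n, 3n].map((x) => [x, mod(secret + a1 * x)] as [bigint, bigint]);
}

// Reconstruct f(0) from two shares via Lagrange interpolation:
// f(0) = y1 * x2/(x2-x1) + y2 * x1/(x1-x2)  (mod P)
function reconstruct(s1: [bigint, bigint], s2: [bigint, bigint]): bigint {
  const [x1, y1] = s1;
  const [x2, y2] = s2;
  const t1 = mod(y1 * x2 * inv(x2 - x1));
  const t2 = mod(y2 * x1 * inv(x1 - x2));
  return mod(t1 + t2);
}

// Usage: any two shares (e.g. device + auth) recover the key; one alone cannot.
const shares = split(123456789n);
console.log(reconstruct(shares[0], shares[2])); // 123456789n
```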
#Anon
Anon was founded by Daniele Sesta, known for creating the DeFi protocols Wonderland and MIM (Magic Internet Money). Like Griffain, Anon aims to simplify user interaction with DeFi.
While the team has previewed upcoming features, none has been validated yet since the product is not publicly available. The features include:
- Execute trades using natural language (including multiple languages such as Chinese)
- Cross-chain bridging via LayerZero
- Lending and supplying via partner protocols such as Aave, Spark, Sky, and Wagmi
- Obtaining real-time price and data information through Pyth
- Providing automatic execution and triggers based on time and gas prices
- Offering real-time market analysis, such as sentiment detection and social profile analysis
Beyond its core functionality, Anon supports multiple AI models, including Gemma, Llama 3.1, Llama 3.3, Vision, Pixtral, and Claude. These models can provide valuable market analysis, helping users save research time and make informed decisions, which is especially valuable in a market where new tokens hit a $100 million market cap daily.
Wallets can be exported and permissions revoked, but specific details regarding wallet management and security protocols have not been disclosed.
Furthermore, Daniele recently released two updates regarding Anon:
- Automate Framework:
A TypeScript framework that helps more projects integrate with Anon faster. It requires all data and interactions to follow a predefined structure, which lets Anon reduce the risk of AI hallucinations and become more reliable (see the sketch after this list).
- Gemma:
A research agent that collects real-time on-chain DeFi metrics (such as TVL, trading volume, and perp DEX funding rates) and off-chain data (such as Twitter and Telegram) for social sentiment analysis. This data is turned into opportunity alerts and tailored insights for users.
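The hallucination-reduction idea behind the Automate Framework can be sketched as a typed action manifest: the agent may only emit calls that match a structure the protocol registered in advance. All names below are hypothetical; Anon's actual API has not been published in this detail.

```typescript
// Sketch of a "predefined structure" integration: a protocol registers a typed
// action manifest, so the agent can only emit calls that match it.
interface ActionParam {
  name: string;
  type: "address" | "uint256" | "string";
  required: boolean;
}

interface ActionManifest {
  protocol: string;  // e.g. "aave" (hypothetical entry)
  action: string;    // e.g. "supply"
  chainId: number;
  params: ActionParam[];
}

const aaveSupply: ActionManifest = {
  protocol: "aave",
  action: "supply",
  chainId: 1,
  params: [
    { name: "asset", type: "address", required: true },
    { name: "amount", type: "uint256", required: true },
  ],
};

// Reject anything the manifest does not describe.
function checkCall(m: ActionManifest, args: Record<string, string>): boolean {
  return m.params.every((p) => !p.required || p.name in args);
}

console.log(checkCall(aaveSupply, { asset: "0xA0b8...", amount: "1000000" })); // true
```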
Based on the documentation, this positions Anon as one of the most anticipated and powerful abstraction tools in the field.
#Slate
Backed by BigBrain Holdings, Slate positions itself as an "Alpha AI" that trades autonomously on on-chain signals. Currently, Slate is the only abstraction AI that can automatically execute trades on Hyperliquid.
Slate prioritizes price routing and fast execution, and can simulate trades before executing them. Key features include:
- Cross-chain swaps between EVM chains and Solana
- Automated trading based on price, market cap, gas fees, and profit/loss metrics
- Natural language task scheduling
- On-chain trade aggregation
- Telegram notification system
- Open long and short positions, repay under specific conditions, and manage LP positions and farming, including execution on Hyperliquid
Overall, its fee structure is divided into two types:
- Regular Operations: Slate charges no fee for ordinary transfers/withdrawals, but charges 0.35% for operations such as swaps, bridging, claims, borrowing, lending, repayment, staking, unstaking, going long, going short, locking, and unlocking.
- Conditional Operations: For conditional orders (such as limit orders), Slate charges 0.25% when the trigger is gas-based and 1.00% for all other conditions.
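Expressed as a small calculator (the encoding is ours; the percentages come from the schedule above):

```typescript
// Slate's published fee schedule as a calculator: 0% transfers/withdrawals,
// 0.35% regular operations, 0.25% gas-conditional orders, 1.00% other conditions.
type Operation =
  | { kind: "transfer" }
  | { kind: "regular" } // swap, bridge, borrow, stake, ...
  | { kind: "conditional"; trigger: "gas" | "other" };

function slateFeeUsd(op: Operation, notionalUsd: number): number {
  switch (op.kind) {
    case "transfer":    return 0;
    case "regular":     return notionalUsd * 0.0035;
    case "conditional": return notionalUsd * (op.trigger === "gas" ? 0.0025 : 0.01);
  }
}

// e.g. a $10,000 swap costs $35; a gas-triggered limit order costs $25.
console.log(slateFeeUsd({ kind: "regular" }, 10_000));                     // 35
console.log(slateFeeUsd({ kind: "conditional", trigger: "gas" }, 10_000)); // 25
```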
In terms of wallets, Slate integrates Privy's embedded wallet architecture, ensuring that neither Slate nor Privy will host users' wallets. Users can connect their existing wallets or authorize agents to execute trades on their behalf.
▲ Source: https://docs.slate.ceo
Comparing the mainstream abstraction AIs:
▲ Source: IOSG Ventures
Currently, most abstraction AI tools support cross-chain trading and asset bridging between Solana and EVM chains. Slate offers Hyperliquid integration, while Neur and Griffain support only Solana for now but are expected to add cross-chain support soon.
Most platforms integrate Privy's embedded wallet and EOA wallets, allowing users to manage funds independently, but they require user authorization for agents to access and execute certain transactions. This provides an opportunity for TEE (Trusted Execution Environment) to ensure the tamper-proof nature of AI systems.
Although most abstraction AI tools share features such as token issuance, trade execution, and natural-language conditional orders, their performance differs significantly.
At the product level, we are still early in abstraction AI. Comparing the five projects above, Griffain stands out for its rich feature set, extensive partner network, and multi-agent collaborative workflow (Orbit is another project supporting multi-agent collaboration). Anon excels in response speed, multilingual support, and Telegram integration, while Slate benefits from its sophisticated automation platform and is the only agent supporting Hyperliquid.
Even so, some platforms still struggle with basic transactions (such as a USDC swap), failing to retrieve the correct token address or price, or to analyze the latest market trends. Response time, accuracy, and relevance of results are also key differentiators in measuring a model's fundamental performance. In the future, we hope to work with teams to build a transparent dashboard that tracks the performance of all abstraction AIs in real time.
#2. Autonomous Yield Optimization and Portfolio Management
Unlike traditional yield strategies, protocols in this category use AI to analyze on-chain data and trends, providing insights that help teams build better yield-optimization and portfolio-allocation strategies.
To reduce costs, models are often trained on Bittensor subnets or off-chain. To let AI execute trades autonomously, verification methods such as ZKPs (zero-knowledge proofs) are used to keep the models honest and verifiable. Here are a few examples of yield-optimizing protocols in DeFAI:
#T3AI
T3AI is a lending protocol that supports uncollateralized borrowing, using AI as an intermediary and risk engine. Its AI agent monitors loan health in real time and ensures loans remain repayable through T3AI's risk-indicator framework, while producing precise risk predictions by analyzing the relationships between assets and their price trends. Specifically, T3AI's AI handles:
- Analyzing price data from major CEXs and DEXs;
- Measuring the volatility of different assets;
- Studying the correlation and interactivity of asset prices;
- Discovering hidden patterns in asset interactions.
AI will suggest optimal allocation strategies based on the user's portfolio and potentially achieve autonomous AI portfolio management after model adjustments. Additionally, T3AI ensures the verifiability and reliability of all operations through ZK proofs and a validator network.
▲ Source: https://www.trustinweb3.xyz/
#Kudai
Kudai is an experimental GMX ecosystem agent developed by GMX Blueberry Club using the EmpyrealSDK toolkit; its token currently trades on Base.
Kudai's premise is to use all transaction fees generated by $KUDAI to fund the agent's autonomous trading operations and distribute the profits to token holders.
In the upcoming Phase 2 (of 4), Kudai will be able to interpret natural-language commands on Twitter to:
- Buy and stake $GMX to generate new income streams;
- Invest in GMX GM pools to further enhance yields;
- Purchase GBC NFTs at the lowest price to expand the portfolio.
After this phase, Kudai is expected to become fully autonomous, independently executing leveraged trades and arbitrage and earning interest on its assets. The team has not disclosed further details.
#Sturdy Finance
Sturdy Finance is a lending and yield aggregator that uses AI models trained by miners on Bittensor subnet 10 (SN10) to optimize yields by moving funds between whitelisted silo pools.
Sturdy employs a two-layer architecture consisting of independent asset pools (silo pools) and an aggregator layer:
- Independent Asset Pools (Silo Pools)
These are isolated single-asset pools where users can borrow only a single asset or borrow against a single type of collateral.
- Aggregator Layer
The aggregator layer is built on Yearn V3 and allocates user assets to whitelisted silo pools based on utilization and yield, with the Bittensor subnet supplying the optimal allocation strategies. When users deposit into the aggregator, they are exposed only to the collateral types they select, fully insulated from other lending pools and collateral assets.
▲ Source: https://sturdy.finance
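The aggregator's allocation problem can be sketched as a greedy heuristic that fills the highest-APY whitelisted silo pools first, subject to a utilization cap. Sturdy actually delegates this optimization to Bittensor SN10 miners; the code below is only our illustration of the objective.

```typescript
// Simplified allocation objective: route deposits to the best eligible pools.
interface SiloPool {
  id: string;
  apyPct: number;         // current supply APY
  utilizationPct: number; // borrowed / supplied
  capacityUsd: number;    // remaining room in the pool
}

function allocate(
  depositUsd: number,
  pools: SiloPool[],
  maxUtilPct = 90
): Map<string, number> {
  const plan = new Map<string, number>();
  let remaining = depositUsd;
  // Greedy: fill the highest-APY eligible pool first.
  const eligible = pools
    .filter((p) => p.utilizationPct < maxUtilPct)
    .sort((a, b) => b.apyPct - a.apyPct);
  for (const pool of eligible) {
    if (remaining <= 0) break;
    const amount = Math.min(remaining, pool.capacityUsd);
    plan.set(pool.id, amount);
    remaining -= amount;
  }
  return plan;
}
```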
As of this writing, Sturdy V2's TVL has been declining since May 2024; the aggregator currently holds roughly $3.9 million, about 29% of the protocol's total TVL.
Since September 2024, Sturdy's daily active users have stayed in the double digits (fewer than 100), with pxETH and crvUSD as the main lending assets in the aggregator. The protocol's growth has clearly stagnated in recent months, and the AI integration appears aimed at reigniting its momentum.
▲ Source: https://dune.com/tk-research/sturdy-v2
#3. Market Analysis Agents
#Aixbt
Aixbt is a market-sentiment tracking agent that aggregates and analyzes data from over 400 Twitter KOLs. With its proprietary engine, aixbt identifies trends in real time and publishes market observations around the clock.
Among existing AI agents, aixbt commands a significant 14.76% mindshare, making it one of the most influential agents in the ecosystem.
▲ Source: Kaito.com
Aixbt is designed for social media interaction, and the insights it publishes directly reflect where the market's attention is focused.
Its functionality goes beyond market insight (alpha) to interactivity: aixbt can answer user questions and even deploy tokens on Twitter using a dedicated toolkit. For example, the $CHAOS token was created jointly by aixbt and another interactive bot, Simmi, using the @EmpyrealSDK toolkit.
Currently, users holding 600,000 $AIXBT tokens (worth roughly $240,000) can access its analysis platform and terminal.
#4. Decentralized AI Infrastructure and Platforms
Web3 AI agents depend on decentralized infrastructure. These projects not only support model training and inference but also provide the data, verification methods, and coordination layers that drive agent development.
Whether in Web2 or Web3, models, compute, and data are the three cornerstones of progress in large language models (LLMs) and AI agents. Open-source models trained in a decentralized way will be favored by agent builders, because the approach eliminates the single-point risks of centralization and opens the door to user-owned AI: developers need not rely on LLM APIs from Web2 giants like Google, Meta, and OpenAI.
Here is the AI infrastructure map drawn by Pink Brains:
▲ Source: Pink Brains
Pioneering institutions such as Nous Research, Prime Intellect, and Exo Labs are pushing the boundaries of decentralized training.
Nous Research's DisTrO training algorithm and Prime Intellect's DiLoCo algorithm have successfully trained models with over 10 billion parameters in low-bandwidth environments, showing that large-scale training is possible outside traditional centralized systems. Exo Labs has pushed further with SPARTA, a distributed AI training algorithm that reduces inter-GPU communication by more than 1,000x.
Bagel aims to become a decentralized Hugging Face, providing models and data for AI developers while using cryptography to solve the ownership and monetization problems of open-source data. Bittensor runs a competitive marketplace where participants contribute compute, data, and intelligence to accelerate the development of AI models and agents.
Many believe aixbt stands out among practical agents primarily because of its access to high-quality datasets.
Providers such as Grass, Vana, Sahara, Space and Time, and Cookie DAO supply high-quality, domain-specific data or give AI developers access to data "walled gardens," enhancing their capabilities. With over 2.5 million nodes, Grass can scrape up to 300 TB of data per day.
Nvidia currently trains its video models on 20 million hours of video, while Grass's video dataset is 15 times larger (300 million hours) and grows by 4 million hours daily. In other words, Grass collects the equivalent of 20% of Nvidia's entire video dataset every day, or the full dataset in just 5 days.
Without compute, agents cannot operate. Compute marketplaces like Aethir and io.net give agent developers cost-effective options by aggregating diverse GPUs. Hyperbolic's decentralized GPU market cuts compute costs by up to 75% while hosting open-source AI models and delivering low-latency inference comparable to Web2 cloud providers.
Hyperbolic extends its GPU market and cloud services with AgentKit, a powerful interface that gives AI agents full access to its decentralized GPU network. AgentKit features an AI-readable map of compute resources that can be scanned in real time, with details on availability, specifications, current load, and performance.
AgentKit opens up a revolutionary future where agents can independently acquire the computing power they need and pay the associated costs.
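Conceptually, such an "AI-readable map of compute" could look like the sketch below, where an agent filters advertised GPU nodes by spec and load and picks the cheapest fit. The schema is entirely our assumption, not Hyperbolic's actual AgentKit API.

```typescript
// Hypothetical compute-map entries an agent might scan before renting a GPU.
interface GpuListing {
  nodeId: string;
  model: string;        // e.g. "A100-80GB" (illustrative)
  vramGb: number;
  loadPct: number;      // current utilization
  pricePerHourUsd: number;
}

function pickNode(
  listings: GpuListing[],
  minVramGb: number,
  maxLoadPct = 80
): GpuListing | null {
  const fits = listings.filter(
    (l) => l.vramGb >= minVramGb && l.loadPct <= maxLoadPct
  );
  if (fits.length === 0) return null;
  // Cheapest node that satisfies the constraints.
  return fits.reduce((a, b) => (b.pricePerHourUsd < a.pricePerHourUsd ? b : a));
}
```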
Through an innovative Proof of Sampling verification mechanism, Hyperbolic ensures every inference interaction in the ecosystem is verified, laying a foundation of trust for a future world of agents.
Verification, however, addresses only part of the trust problem for autonomous agents. Another dimension is privacy, which is where TEE (Trusted Execution Environment) infrastructure projects like Phala, Automata, and Marlin excel: for instance, the proprietary data and models these AI agents use can be kept securely confidential.
In fact, truly autonomous agents cannot operate fully without TEEs, which are crucial for protecting sensitive information: safeguarding wallet private keys, preventing unauthorized access, and securing Twitter account logins.
A TEE isolates sensitive data inside a protected CPU/GPU enclave during processing. Only authorized program code can access the enclave's contents; cloud providers, developers, administrators, and other hardware components cannot.
The primary use of TEE is to execute smart contracts, especially in DeFi protocols involving more sensitive financial data. Therefore, the integration of TEE with DeFAI includes traditional DeFi application scenarios, such as:
- Transaction Privacy: TEE can hide transaction details, such as sender and receiver addresses and transaction amounts. Platforms like Secret Network and Oasis use TEE to protect transaction privacy in DeFAI applications, enabling privacy-preserving payments.
- MEV Resistance: By executing smart contracts within TEE, block builders cannot access transaction information, thus preventing front-running attacks that generate MEV. Flashbots has developed BuilderNet using TEE, a decentralized block building network that reduces the censorship risks associated with centralized block building. Chains like Unichain and Taiko also use TEE to provide users with a better trading experience.
These capabilities are also achievable with alternatives like ZKP or MPC, but among the three, TEE currently executes smart contracts most efficiently, simply because its security model is hardware-based.
In terms of agents, TEE provides various capabilities:
- Automation: TEE can create an independent operating environment for agents, ensuring that their strategies are executed without human interference. This guarantees that investment decisions are entirely based on the agent's independent logic.
- TEE also lets agents control social media accounts, ensuring that their public statements are independent and free of outside influence, avoiding any suspicion of paid promotion. Phala is working with the ai16z team to run Eliza efficiently in a TEE environment.
- Verifiability: People can verify whether agents are computing using the promised models and producing valid results. Automata and Brevis are collaborating to develop this functionality.
As more specialized agents with specific use cases (DeFi, gaming, investment, music, etc.) enter the field, better agent collaboration and seamless communication become crucial.
The infrastructure for agent swarm frameworks has emerged to address the limitations of monolithic agents. Swarm intelligence allows agents to work together as a team, pooling their capabilities to achieve common goals. The coordination layer abstracts complexity, making it easier for agents to collaborate under shared objectives and incentive mechanisms.
Several Web3 companies, including Theoriq, FXN, and Questflow, are moving in this direction. Among all these participants, Theoriq, initially launched as ChainML in 2022, has been working the longest to achieve this goal, with a vision to become the universal foundational layer for agent-based AI.
To realize this vision, Theoriq handles agent registration, payments, security, routing, planning, and management at the underlying module level. It also connects supply and demand, providing an intuitive agent-building platform called Infinity Studio that allows anyone to deploy their own agents, along with Infinity Hub (a marketplace where clients can browse all available agents). In its swarm system, meta-agents select the most suitable agents for a given task, creating "swarms" to achieve common goals while tracking reputation and contributions to maintain quality and accountability.
Theoriq tokens provide economic assurance, allowing agent operators and community members to use tokens to represent the quality and trust of agents, thereby incentivizing quality service and deterring malicious behavior. Tokens can also serve as a medium of exchange for paying service fees and accessing data, as well as rewarding participants who contribute data, models, and more.
▲ Source: Theoriq
As AI agents mature into a lasting industry theme led by clearly useful agents, we may see a revival of Crypto x AI infrastructure projects with strong price performance. These projects can leverage their venture funding, years of R&D, and domain-specific technical expertise to expand across the value chain, potentially building advanced practical AI agents that outperform 95% of the agents currently on the market.
The Evolution and Future of DeFAI
I have always believed that markets develop in three stages of demand: efficiency first, then decentralization, and finally privacy. Along those lines, DeFAI will evolve through four stages.
The first stage of DeFAI focuses on efficiency: improving the user experience with tools that complete complex DeFi tasks without requiring deep protocol knowledge. Examples include:
- AI that understands user prompts even with imperfect formatting
- Swaps executed quickly, within the shortest possible block time
- Real-time market research that helps users make favorable decisions based on their goals
If this innovation lands, it can save users time and effort while lowering the barrier to on-chain transactions, potentially creating a "Phantom" moment in the coming months.
In the second stage, agents will autonomously trade with minimal human intervention. Trading agents can execute strategies based on third-party opinions or data from other agents, creating a new DeFi model. Professional or mature DeFi users can fine-tune their models to build agents that generate optimal returns for themselves or their clients, thereby reducing the need for manual monitoring.
In the third stage, users will begin to focus on wallet management issues and AI verification, as they will demand transparency. Solutions like TEE and ZKP will ensure that AI systems are tamper-proof, unaffected by third-party interference, and verifiable.
Finally, once these stages are complete, no-code DeFAI toolkits or AI-as-a-service protocols can create an agent economy that trades using models trained on crypto data.
While this vision is ambitious and exciting, there are still several bottlenecks to be addressed:
- Most current tools are merely wrappers around ChatGPT, and there are no clear benchmarks for identifying high-quality projects
- On-chain data fragmentation pushes AI models toward centralization rather than decentralization; how on-chain agents will solve this remains unclear.