AI Agent tokens keep falling: is MCP's surging popularity to blame?


The real battleground for web3 AI Agents is aligning their "complex workflows" as closely as possible with blockchain's "trust verification flows."

Author: Haotian

Some friends ask whether the continued slide of web3 AI Agent projects such as #ai16z and arc was triggered by the recently popular MCP protocol. At first glance this sounds far-fetched, but on reflection there is real logic to it: the valuation and pricing logic of existing web3 AI Agents has changed, and both the narrative direction and the product roadmap need urgent adjustment. Here are my personal views:

1) MCP (Model Context Protocol) is an open-source standardization protocol designed to seamlessly connect various AI LLMs/Agents to different data sources and tools, like a plug-and-play USB "universal" interface that replaces the bespoke, end-to-end "specific" integrations previously required.

In plain terms, AI applications used to sit in data silos: for Agents/LLMs to interoperate, each side had to build its own API integrations, which made operations complicated, lacked bidirectional interaction, and usually came with limited model access and tight permission restrictions.

The emergence of MCP provides a unified framework that lets AI applications escape their old data silos and gain "dynamic" access to external data and tools, significantly reducing development complexity and improving integration efficiency, especially for automated task execution, real-time data querying, and cross-platform collaboration.
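To make the "universal interface" idea concrete, here is a minimal Python sketch of a unified tool registry in the spirit of MCP. It is an illustration only, not the actual MCP SDK: `Tool`, `ToolRegistry`, and the two sample tools are hypothetical names.

```python
from collections.abc import Callable
from dataclasses import dataclass
from typing import Any


@dataclass
class Tool:
    """One externally exposed capability: a name, a description, and a handler."""
    name: str
    description: str
    handler: Callable[[dict], Any]


class ToolRegistry:
    """A single interface the agent talks to, instead of one bespoke API per data source."""

    def __init__(self) -> None:
        self._tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        # Any data source or service plugs in by conforming to one common contract.
        self._tools[tool.name] = tool

    def list_tools(self) -> list[str]:
        # The agent discovers available capabilities dynamically at runtime.
        return sorted(self._tools)

    def call(self, name: str, args: dict) -> Any:
        # One uniform call path replaces N point-to-point integrations.
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name].handler(args)


# Usage: two unrelated "servers" expose tools through the same interface.
registry = ToolRegistry()
registry.register(Tool("price.lookup", "Spot price feed", lambda a: {"symbol": a["symbol"], "usd": 42.0}))
registry.register(Tool("db.query", "Read-only SQL", lambda a: [("row", 1)]))

print(registry.list_tools())                          # ['db.query', 'price.lookup']
print(registry.call("price.lookup", {"symbol": "ETH"}))
```

The point of the sketch is the contrast it illustrates: without a shared contract, every Agent-to-tool pair needs its own integration; with one, adding a new tool is a single `register` call.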

At this point, many people immediately wonder: if Manus, with its push for multi-Agent collaboration, were integrated with the open-source MCP framework, would the result be unbeatable?

Indeed. Manus + MCP is precisely the combination that has been hitting web3 AI Agents recently.

2) What is striking, however, is that Manus and MCP are both frameworks and protocol standards aimed at web2 LLMs/Agents: they solve the problem of data interaction and collaboration between centralized servers, and their permissions and access control still depend on each server node "voluntarily" opening up. In other words, they are merely open-source tools.

Logically, this runs completely counter to the core ideas web3 AI Agents pursue, namely "distributed servers, distributed collaboration, distributed incentives." Why can centralized artillery blow up decentralized fortresses?

The reason is that the first phase of web3 AI Agents became too "web2-like." For one thing, many teams come from web2 backgrounds and lack a full grasp of web3-native needs. Take the ElizaOS framework: at its core it was a packaging framework for quickly deploying AI Agent applications. It integrated platforms like Twitter and Discord along with APIs from OpenAI, Claude, DeepSeek, and others, and wrapped some general Memory and Character frameworks on top to help developers ship AI Agent applications fast. But if we are being strict, how does this service framework differ from web2 open-source tooling? What differentiated advantage does it have?

Uh, is the advantage just having a set of Tokenomics incentive mechanisms? Then using a framework that web2 could replace wholesale to incentivize a batch of AI Agents whose main reason to exist is issuing new tokens? Scary… Follow that logic and you can roughly see why Manus + MCP can hit web3 AI Agents so hard.

Since most web3 AI Agent frameworks and services only address rapid-development needs similar to web2 AI Agents, while failing to keep up with web2's pace of innovation in technical services, standards, and differentiated advantages, the market and capital have revalued and repriced the previous batch of web3 AI Agents.

3) At this point, the crux of the problem should be clear. But how to break the deadlock? There is only one way: focus on building web3-native solutions, because the operation and incentive architecture of distributed systems is web3's absolute differentiated advantage.

Take distributed cloud compute, data, algorithms, and other service platforms as an example. On the surface, compute and data aggregated under the banner of "idle resources" seem unable to meet the needs of engineering innovation in the short term; and while large numbers of AI LLMs are racing for centralized compute to chase performance breakthroughs, a service model touting "idle resources, low cost" will naturally be looked down upon by web2 developers and VC teams.

But once web2 AI Agents move past the performance-innovation stage, they will inevitably pursue vertical application scenarios and fine-tuned model optimization, and that is when the advantages of web3 AI resource services will truly show.

In fact, once web2 AI has climbed to giant status by monopolizing resources, it becomes hard at a certain stage for it to step back to the "encircle the cities from the countryside" playbook of breaking through segmented scenarios one by one. That will be the moment when surplus web2 AI developers and web3 AI resources join forces.

Beyond web2-style quick deployment + multi-Agent collaboration and communication frameworks + Tokenomics narratives, there are many web3-native innovation directions worth exploring for web3 AI Agents:

For example, equipping Agents with a decentralized consensus and collaboration framework. Given the "off-chain LLM computation + on-chain state storage" pattern, many adapted components are required (a combined sketch of the first three follows the list):

  1. A decentralized DID identity system that gives Agents verifiable on-chain identities, analogous to the unique addresses virtual machines generate for smart contracts, mainly so their subsequent state can be tracked and recorded;

  2. A decentralized Oracle system responsible for the trustworthy acquisition and verification of off-chain data. Unlike earlier Oracles, an Oracle adapted for AI Agents will likely need a combined architecture of multiple Agents (a data-collection layer, a decision-consensus layer, and an execution-feedback layer) so that the data an Agent needs for on-chain and off-chain computation and decision-making is available in real time;

  3. A decentralized storage (DA) system. Because the knowledge-base state during an Agent's operation is uncertain and the reasoning process is transient, a system is needed that records the key state libraries and reasoning paths behind the LLM in distributed storage, with a cost-controllable data-proof mechanism to guarantee data availability during public-chain verification;

  4. A zero-knowledge-proof (ZKP) privacy-computing layer that can link up with privacy-computing solutions such as TEE and FHE, enabling real-time private computation plus data-proof verification, so that Agents can draw on a wider range of vertical data sources (medical, financial) and more specialized, customized service Agents can emerge on top;

  5. A cross-chain interoperability protocol, somewhat like the framework defined by the open-source MCP protocol, except that this interoperability solution needs relay and communication-scheduling mechanisms adapted to Agent operation, transmission, and verification, able to handle asset transfer and state synchronization for Agents across different chains, including complex state such as Agent context, prompts, knowledge bases, Memory, and so on;

……
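As a rough illustration of how the first three components might fit together, here is a self-contained Python sketch. It is a toy under heavy assumptions: a real system would anchor the identity registry and checkpoints on-chain, use actual signatures, and run a real consensus protocol; `AgentIdentity`, `oracle_consensus`, and `Checkpoint` are all hypothetical names.

```python
import hashlib
import json
from collections import Counter
from dataclasses import dataclass


def h(payload: dict) -> str:
    """Stable content hash used as a verifiable commitment."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()


# 1) DID-style identity: a deterministic identifier derived from the Agent's
#    public key, so later state updates can be attributed and tracked.
@dataclass(frozen=True)
class AgentIdentity:
    pubkey: str

    @property
    def did(self) -> str:
        return "did:agent:" + h({"pubkey": self.pubkey})[:16]


# 2) Oracle decision-consensus layer: several collector Agents report a value,
#    and the consensus layer accepts it only with a 2/3 supermajority.
def oracle_consensus(reports: dict[str, str], quorum: float = 2 / 3) -> str | None:
    value, votes = Counter(reports.values()).most_common(1)[0]
    return value if votes / len(reports) >= quorum else None


# 3) DA checkpoint: commit only hashes of the Agent's state and reasoning
#    trace, keeping on-chain verification cheap while the bulk data lives
#    in distributed storage.
@dataclass(frozen=True)
class Checkpoint:
    did: str
    state_root: str   # hash of the knowledge-base snapshot
    trace_root: str   # hash of the reasoning path


agent = AgentIdentity(pubkey="0xABCDEF")
reports = {"collector-1": "3150.2", "collector-2": "3150.2", "collector-3": "3149.9"}
price = oracle_consensus(reports)

cp = Checkpoint(
    did=agent.did,
    state_root=h({"kb": "snapshot-v1", "price": price}),
    trace_root=h({"steps": ["fetch", "vote", "decide"]}),
)
print(agent.did, price, cp.state_root[:12])
```

The design choice worth noting is that only commitments (hashes) would go on-chain; the knowledge base and reasoning trace stay in distributed storage, which is what keeps the data-proof mechanism cost-controllable.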

In my view, the real battleground for web3 AI Agents is how to align the "complex workflows" of AI Agents as closely as possible with the "trust verification flows" of blockchain. As for where these incremental solutions come from, both possibilities exist: iterative upgrades from existing narrative projects, or newly forged projects on the AI Agent narrative track.
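One concrete way to read that alignment: every step of an Agent's workflow emits a hash commitment into an append-only, hash-chained log, so the workflow itself produces the trail that verifiers replay. Below is a minimal sketch, with hash-chaining standing in for a real on-chain log or ZK proof; `VerifiableWorkflow` is a hypothetical name.

```python
import hashlib
import json


class VerifiableWorkflow:
    """Append-only, hash-chained log: each Agent step commits to its inputs,
    outputs, and the previous entry, so the whole run can be re-verified."""

    def __init__(self) -> None:
        self.log: list[dict] = []

    def step(self, name: str, inputs: dict, outputs: dict) -> str:
        # Chain each step to the one before it, like blocks in a chain.
        prev = self.log[-1]["commit"] if self.log else "genesis"
        entry = {"name": name, "inputs": inputs, "outputs": outputs, "prev": prev}
        entry["commit"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.log.append(entry)
        return entry["commit"]

    def verify(self) -> bool:
        # Recompute every commitment; any tampered step breaks the chain.
        prev = "genesis"
        for e in self.log:
            body = {k: e[k] for k in ("name", "inputs", "outputs", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["commit"] != expected:
                return False
            prev = e["commit"]
        return True


wf = VerifiableWorkflow()
wf.step("fetch_price", {"symbol": "ETH"}, {"usd": 3150.2})
wf.step("decide", {"usd": 3150.2}, {"action": "hold"})
print(wf.verify())  # True: the workflow doubles as its own verification flow
```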

This is the direction web3 AI Agents should be building toward: a foundation of genuine innovation that fits the macro narrative of AI + Crypto. Without such breakthroughs and differentiated competitive moats, every small move in the web2 AI track could upend the web3 AI landscape.

