Is the continuous decline of web3 AI Agents caused by the recently popular MCP protocol?

链捕手 (ChainCatcher)
5 hours ago

Author: Haotian

Some friends say the continued decline of web3 AI Agent tokens such as #ai16z and $arc was caused by the recently popular MCP protocol. At first glance it sounds baffling: WTF do the two have to do with each other? But on further reflection, there is some logic to it: the valuation and pricing logic of existing web3 AI Agents has changed, and their narrative direction and product roadmaps urgently need adjustment. Below are my personal views:

1) MCP (Model Context Protocol) is an open-source standardized protocol that lets all kinds of AI LLMs/Agents seamlessly connect to all kinds of data sources and tools, acting like a plug-and-play "universal" USB interface that replaces the end-to-end "bespoke" integrations previously required.

In simple terms, AI applications used to sit in significant data silos: for Agents/LLMs to interoperate, each pair had to build its own API integration, which complicated operations, lacked bidirectional interaction, and typically came with limited model access and strict permission restrictions.

The emergence of MCP provides a unified framework that frees AI applications from the old data-silo state, enabling "dynamic" access to external data and tools. This significantly reduces development complexity and improves integration efficiency, especially for automated task execution, real-time data querying, and cross-platform collaboration.
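To make the "universal interface" idea concrete, here is a minimal sketch of the kind of JSON-RPC 2.0 exchange MCP standardizes: a client (Agent) invokes a tool exposed by a server through one shared message shape instead of a bespoke per-pair API. The method and field names (`tools/call`, `name`, `arguments`, `content`) follow the published MCP spec, but the toy server and the `get_price` tool are purely illustrative.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize a tools/call request the way an MCP client would."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

def handle_request(raw: str, registry: dict) -> str:
    """Toy server loop: look the requested tool up and run it."""
    req = json.loads(raw)
    tool = registry[req["params"]["name"]]
    result_text = tool(**req["params"]["arguments"])
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"content": [{"type": "text", "text": result_text}]},
    })

# A hypothetical tool this server exposes; any data source could sit behind it.
registry = {"get_price": lambda symbol: f"{symbol}: 42.0"}

request = make_tool_call(1, "get_price", {"symbol": "ETH"})
response = json.loads(handle_request(request, registry))
print(response["result"]["content"][0]["text"])  # ETH: 42.0
```

The point of the sketch is that the client never needs to know how `get_price` is implemented; any server speaking the same message shape is interchangeable, which is exactly what breaks the silos.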

At this point, many people immediately wonder: if Manus, with its multi-Agent collaboration innovations, is combined with the MCP open-source framework that facilitates exactly that kind of collaboration, would the result be invincible?

That's right, Manus + MCP is the key to the recent impact on web3 AI Agents.

2) However, and this is the astonishing part, both Manus and MCP are frameworks and protocol standards built for web2 LLMs/Agents: they solve data interaction and collaboration between centralized servers, and their permissions and access control still depend on each server node "voluntarily" opening up. In other words, they are merely open-source tools.

Logically, this completely deviates from the core ideas pursued by web3 AI Agents, such as "distributed servers, distributed collaboration, distributed incentives." How can centralized artillery blow up decentralized fortresses?

The reason is that the first phase of web3 AI Agents became too "web2-like." On one hand, many teams come from web2 backgrounds and lack a full understanding of web3-native needs. The ElizaOS framework, for example, started as a packaging framework to help developers quickly deploy AI Agent applications: it integrated platforms such as Twitter and Discord, wrapped APIs from OpenAI, Claude, DeepSeek, and the like, and packaged general Memory and Character frameworks on top. But strictly speaking, what distinguishes this service framework from a web2 open-source tool? What differentiated advantage does it have?

Uh, is the advantage just a set of Tokenomics incentives? Using a framework that web2 can completely replace to incentivize a batch of AI Agents that exist mainly to issue new tokens? Scary… Following this logic, you can roughly see why Manus + MCP can impact web3 AI Agents.

Since many web3 AI Agent frameworks and services only address quick-development needs similar to web2 AI Agents, while failing to keep pace with web2's innovation in technical services, standards, and differentiated advantages, the market and capital have revalued and repriced that first batch of web3 AI Agents.

3) By now the crux of the problem should be clear, but how to break the deadlock? There is only one way: focus on building web3-native solutions, because distributed systems and incentive architectures are web3's absolute differentiated advantage.

Take distributed cloud compute, data, algorithms, and other service platforms as an example. On the surface, compute and data aggregated from idle resources cannot meet short-term needs for engineering innovation: while large numbers of AI LLMs are still racing for centralized compute to achieve performance breakthroughs, a service model touting "idle resources, low cost" will naturally be looked down upon by web2 developers and VC teams.

But once web2 AI Agents move past the stage of competing on raw performance, they will inevitably pursue vertical application scenarios and fine-tuned model optimization, and that is when the advantages of web3 AI resource services will truly emerge.

In fact, once web2 AI has climbed to giant status through resource monopolization, it becomes difficult for it to go back to "encircling the cities from the countryside," breaking through each segmented scenario one by one. That will be the moment for web2's surplus AI developers and web3's AI resources to join forces.

In fact, beyond web2's quick-deployment and multi-Agent collaboration frameworks plus a Tokenomics narrative, there are many web3-native directions worth exploring for web3 AI Agents:

For example, a decentralized DID identity verification system, allowing Agents to have verifiable on-chain identities, similar to the unique addresses a virtual machine generates for smart contracts, mainly for subsequent state tracking and recording;

A decentralized Oracle system, mainly responsible for the trustworthy acquisition and verification of off-chain data. Unlike previous Oracles, an Oracle adapted for AI Agents may need a composite architecture of multiple Agents, including a data collection layer, a decision consensus layer, and an execution feedback layer, to ensure that the data required for Agents' on-chain and off-chain computations and decisions is delivered in real time;

A decentralized storage/DA system: because an AI Agent's knowledge-base state is uncertain during operation and its reasoning process is transient, a system is needed to record the key state libraries and reasoning paths behind the LLMs in distributed storage, with a cost-controllable data-proof mechanism that ensures data availability for public-chain verification;

A zero-knowledge proof (ZKP) privacy computing layer, linkable with privacy-computing solutions such as TEE and FHE, achieving real-time privacy computation plus data-proof verification, so Agents can draw on broader vertical data sources (medical, financial) and more specialized, customized service Agents can emerge on top;

A cross-chain interoperability protocol, somewhat similar to the framework defined by the MCP open-source protocol, except that this interoperability solution needs relay and communication-scheduling mechanisms adapted to Agent operation, transmission, and verification, able to complete cross-chain asset transfers and Agent state synchronization, including complex state such as Agent context, Prompts, knowledge bases, and Memory;

……
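Taking the first direction above, the "unique addresses generated by virtual machines for smart contracts" analogy can be sketched in a few lines. This is a hypothetical scheme, not any existing DID standard: it derives an Agent identifier deterministically from a deployer and a nonce using sha256 over a simple encoding, whereas Ethereum, for comparison, derives contract addresses with keccak-256 over an RLP encoding.

```python
import hashlib

def derive_agent_id(deployer: str, nonce: int) -> str:
    """Deterministically derive a 20-byte Agent identifier from (deployer, nonce)."""
    preimage = f"{deployer}:{nonce}".encode()
    # Truncate the sha256 digest to 20 bytes (40 hex chars), address-style.
    return "0x" + hashlib.sha256(preimage).hexdigest()[:40]

# The same (deployer, nonce) always yields the same id, so any verifier can
# recompute and check it; different nonces yield distinct ids for tracking
# each Agent's state history separately.
a = derive_agent_id("0xDeployerAddress", 0)
b = derive_agent_id("0xDeployerAddress", 1)
print(a, a == derive_agent_id("0xDeployerAddress", 0), a != b)
```

The design point is that the identity carries no secret and needs no registry lookup to verify: determinism alone makes it checkable on-chain, which is what the state-tracking use case needs.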

In my view, the real key to conquering web3 AI Agents is aligning AI Agents' "complex workflows" with blockchains' "trust verification flows" as closely as possible. As for whether these incremental solutions come from upgrades of existing narrative projects or from newly forged projects on the AI Agent track, both are possible.

This is the direction web3 AI Agents should strive to build: the fundamental innovation ecosystem under the macro narrative of AI + Crypto. Without such innovative exploration and differentiated competitive barriers, every stir in the web2 AI track could shake the web3 AI landscape.

Disclaimer: This article represents the author's personal views only and does not represent the position or views of this platform. It is provided for information sharing only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page involves infringement, please email the relevant proof of rights and identity to support@aicoin.com, and the platform's staff will review it.
