The closer AI gets to human intelligence, the more it needs a non-human defense system.
Written by: 0xResearcher
Manus has achieved state-of-the-art (SOTA) results on the GAIA benchmark, outperforming OpenAI's models at the same level. In other words, it can independently complete complex tasks such as multinational business negotiations, which involve breaking down contract terms, predicting counterparty strategies, generating proposals, and even coordinating legal and financial teams. Compared with traditional systems, Manus's advantages lie in dynamic goal decomposition, cross-modal reasoning, and memory-augmented learning. It can break a large task into hundreds of executable subtasks, handle multiple data types simultaneously, and use reinforcement learning to continuously improve its decision-making efficiency and reduce error rates.
Even as we marvel at the rapid pace of technological progress, Manus has reignited a debate within the community over the evolutionary path of AI: will the future be dominated by AGI (Artificial General Intelligence) or by MAS (Multi-Agent System) collaboration?
This can be traced back to the design philosophy of Manus, which implies two possibilities:
One is the AGI path: keep raising the intelligence of a single agent until it approaches human-level comprehensive decision-making.
The other is the MAS path: act as a super-coordinator, directing thousands of vertical-domain agents to work together.
On the surface this is a debate about divergent paths; in reality it is about the underlying tension in AI development: how should efficiency and safety be balanced? As individual intelligence approaches AGI, the risk of black-box decision-making grows; multi-agent collaboration can disperse that risk, but communication delays may cause it to miss critical decision windows.
The evolution of Manus inadvertently amplifies the inherent risks of AI development. Consider the data-privacy black hole: in medical scenarios, Manus needs real-time access to patient genomic data; in financial negotiations, it may touch undisclosed corporate financials. Or the algorithmic-bias trap: in recruitment negotiations, Manus has offered below-average salary suggestions to candidates of specific ethnicities, and in legal contract reviews its misjudgment rate for emerging-industry terms approaches 50%. There are also adversarial-attack vulnerabilities: hackers can implant specific audio frequencies that cause Manus to misjudge a counterparty's bidding range during a negotiation.
We must confront a terrifying pain point of AI systems: the more intelligent the system, the broader the attack surface.
Security, however, is a term that comes up constantly in Web3. Under Vitalik Buterin's "impossible triangle" (the blockchain trilemma: a network cannot simultaneously achieve security, decentralization, and scalability), a variety of cryptographic approaches have emerged:
- Zero Trust Security Model: The core idea of the Zero Trust Security Model is "trust no one, always verify," meaning that no device should be trusted by default, regardless of whether it is on the internal network. This model emphasizes strict identity verification and authorization for every access request to ensure system security.
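The "never trust, always verify" principle can be sketched as per-request message authentication: every call carries a signed, time-bounded tag that is checked regardless of where the caller sits on the network. This is a minimal illustration under assumed names (the shared key, agent IDs, and resource paths are all hypothetical), not a description of any particular product's mechanism:

```python
import hmac, hashlib, time

SECRET_KEY = b"shared-secret-key"  # hypothetical key; rotate and scope per-agent in practice

def sign_request(agent_id: str, resource: str, timestamp: int) -> str:
    # Bind the caller identity, target resource, and time into one MAC tag.
    msg = f"{agent_id}|{resource}|{timestamp}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

def verify_request(agent_id: str, resource: str, timestamp: int,
                   tag: str, max_age: int = 30) -> bool:
    # Every request is verified -- no implicit trust for "internal" callers.
    if time.time() - timestamp > max_age:
        return False  # stale request; reject to limit replay
    expected = sign_request(agent_id, resource, timestamp)
    return hmac.compare_digest(expected, tag)

now = int(time.time())
tag = sign_request("agent-7", "/patient/genome", now)
print(verify_request("agent-7", "/patient/genome", now, tag))  # True
print(verify_request("agent-7", "/admin", now, tag))           # False: tag bound to another resource
```

A tag signed for one resource fails verification against any other, so a leaked request cannot be replayed elsewhere.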
- Decentralized Identity (DID): DID is a set of identifier standards that allows entities to obtain identification in a verifiable and persistent manner without a centralized registry. This achieves a new decentralized digital identity model, often compared with self-sovereign identity, and is an important component of Web3.
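Concretely, a DID resolves to a DID document that lists the subject's public keys and authentication methods, with no central registry involved. Below is a minimal sketch following the W3C DID Core data model; the identifier and key value are placeholder examples, not real credentials:

```python
import json

# Hypothetical identifier using the "example" DID method.
did = "did:example:123456789abcdefghi"

# Minimal DID document: the subject, one verification key, and
# a reference marking that key as usable for authentication.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": did,
    "verificationMethod": [{
        "id": f"{did}#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": did,
        "publicKeyMultibase": "zExamplePublicKeyValue",  # placeholder, not a real key
    }],
    "authentication": [f"{did}#key-1"],
}

print(json.dumps(did_document, indent=2))
```

Anyone holding the document can verify a signature against `verificationMethod` without asking a central authority who the subject is.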
- Fully Homomorphic Encryption (FHE) is an advanced encryption technology that allows arbitrary computations to be performed on encrypted data without decrypting it. This means that third parties can operate on ciphertext, and the results obtained will be consistent with those obtained by performing the same operations on plaintext after decryption. This feature is significant for scenarios that require computation without exposing original data, such as cloud computing and data outsourcing.
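The core idea of computing on ciphertext can be demonstrated with the Paillier cryptosystem. Paillier is only *additively* homomorphic (full FHE schemes such as TFHE or CKKS rely on lattice constructions and are far more involved), so treat this as an illustration of the principle, with toy key sizes that are not remotely secure:

```python
import math, random

def lcm(a, b):
    return a * b // math.gcd(a, b)

# Toy primes for demonstration -- real deployments use 2048-bit+ moduli.
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

def add_ciphertexts(c1, c2):
    # Multiplying ciphertexts adds the underlying plaintexts.
    return (c1 * c2) % n2

ca, cb = encrypt(42), encrypt(17)
print(decrypt(add_ciphertexts(ca, cb)))  # 59 -- computed without ever decrypting the inputs
```

The party performing `add_ciphertexts` never sees 42 or 17; only the holder of the private key (`lam`, `mu`) can decrypt the result.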
Both the Zero Trust Security Model and DID have seen waves of projects tackle these challenges across multiple bull markets; some succeeded, while others were submerged in the crypto tide. FHE, the youngest of these cryptographic techniques, is also a powerful tool for addressing security in the AI era.
How can FHE address the risks above?
First, at the data level: all user input (including biometric features and voice tone) is processed in encrypted form, and even Manus itself cannot decrypt the original data. In a medical-diagnosis scenario, for example, patient genomic data is analyzed entirely as ciphertext, so biological information is never exposed.
At the algorithm level: with FHE-enabled "encrypted model training," even developers cannot peek into the AI's decision-making path.
At the collaboration level: multiple agents communicate using threshold encryption, so the compromise of a single node does not leak global data. Even in supply-chain attack-and-defense drills, attackers who infiltrate several agents still cannot reconstruct a complete view of the business.
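The threshold property described above can be sketched with Shamir's secret sharing, a common building block of threshold encryption: a secret is split into n shares such that any t of them reconstruct it, while fewer than t reveal nothing. This is a generic textbook sketch, not the scheme any particular project uses:

```python
import random

PRIME = 2**61 - 1  # field modulus (a Mersenne prime)

def make_shares(secret, threshold, n_shares):
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# Split a key among 5 agents; any 3 can recover it, any 2 cannot.
shares = make_shares(123456789, threshold=3, n_shares=5)
print(reconstruct(shares[:3]))  # 123456789
```

An attacker who compromises two of the five agents holds two points on a degree-2 polynomial, which is consistent with every possible secret; only a third share pins it down.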
Because of these technical barriers, Web3 security may not touch most users directly, yet its indirect stakes are everywhere. In this dark forest, those who do not arm themselves will never shed the label of "retail investor."
- uPort was launched on the Ethereum mainnet in 2017 and may be one of the earliest decentralized identity (DID) projects released on the mainnet.
- In terms of the Zero Trust Security Model, NKN launched its mainnet in 2019.
- Mind Network is the first FHE project to go live on mainnet, and has partnered with ZAMA, Google, DeepSeek, and others.
The author had not heard of uPort or NKN before, which suggests that security projects do struggle to attract speculators' attention. Whether Mind Network can escape this curse and become a leader in the security field remains to be seen.
The future is here. The closer AI gets to human intelligence, the more it needs a non-human defense system. The value of FHE lies not only in solving current problems but also in paving the way for the era of strong AI. On this treacherous path to AGI, FHE is not an option but a necessity for survival.
Disclaimer: This article represents the author's personal views only and does not reflect the position or views of this platform. It is for information sharing only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page infringes your rights, please send proof of rights and proof of identity to support@aicoin.com, and platform staff will investigate.