
Phyrex|Feb 06, 2025 08:44
I had a long meeting with @DeAgentAI today, mainly discussing the differences between AI agents in the cryptocurrency industry and AI in traditional fields. In fact, even the best-known traditional AI applications fall into just a few categories, such as autonomous driving, language-model-based interaction, and AI image and animation generation, which are the ones most visible to everyday users.
In cryptocurrency, AI is still mostly about concepts. Lately I have been blocking large numbers of "AI erotic chat" advertisements every day, which are basically pornographic versions of ChatGPT; chat-style applications are also common in the crypto field. Another type is LUMO, a low-level AI application that provides developers with datasets. And of course there is the kind of "AI" that tweets automatically, which still relies on some manual screening behind the scenes.
The representative projects in the current crypto field are Virtuals and ai16z. Virtuals' applications still focus mainly on "interaction", and the shadow of traditional ChatGPT is clearly visible. ai16z's model is genuinely innovative, using AI to make investments, but its effectiveness is still debatable, and it remains unclear whether it is suitable for ordinary users.
So in the category that could actually help users trade, we have not yet seen real products land, whether for making trading decisions, providing trading assistance, or even genuine AI analysis tools. If AI in the cryptocurrency industry has a future at all, this kind of AI will certainly be welcomed. In fact, quantitative trading is already a kind of "AI"; the difference is that the strategy itself still requires human intervention rather than being the "AI"'s own choice.
There are several main reasons why these problems, which even a layman like me can identify, have not been well solved.
1. Lack of Consensus (Multi Persona Dilemma)
This is easy to understand: the answers an AI gives to the same question may be completely different each time, which shows that the AI itself lacks a consistent and clear intention. That makes it prone to giving conflicting directions when trading, so investors cannot fully trust it.
2. Fragmented Identity
The possibility of multiple paths or outcomes means an AI can output completely opposite answers to the same problem at the same time, which breaks the trust mechanism of any governance system. Blockchains, by contrast, use a single-chain structure to avoid this kind of conflict: at any given moment, the Agent's state chain accepts only one state update, ensuring that only one result is output at a time (see the sketch after this list).
3. Systemic Memory Disorder
If the first two still sound abstract, I have run into the third problem myself. I have mentioned before that I want to use ChatGPT to organize my own trading model, but one of the main reasons I cannot do it is that the amount of data the AI can retain is very limited: it cannot hold all of my articles from recent years, so I cannot build a model on top of them. AI lacks traceable long-term memory and cannot establish a continuously optimized decision chain.
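To make points 2 and 3 a bit more concrete, here is a minimal, hypothetical Python sketch (my own illustration, not code from DeAgentAI or any other project) of a single-chain agent state: each step accepts exactly one state update, a conflicting second update is rejected instead of forking the state, and the hash-linked history gives a traceable long-term record of decisions.

```python
import hashlib
import json
import time

class AgentStateChain:
    """Toy append-only state chain: one canonical update per step, replayable history."""

    def __init__(self):
        self.entries = []  # ordered, append-only decision history

    def _hash(self, payload: dict) -> str:
        return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

    def commit(self, step: int, state: dict) -> str:
        # Reject a second, conflicting update for a step that is already final.
        if self.entries and self.entries[-1]["step"] >= step:
            raise ValueError(f"step {step} already finalized; conflicting update rejected")
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {"step": step, "state": state, "prev": prev_hash, "ts": time.time()}
        entry["hash"] = self._hash(entry)  # hash-link to the previous entry
        self.entries.append(entry)
        return entry["hash"]

    def history(self):
        # Traceable long-term memory: the whole decision chain can be replayed.
        return list(self.entries)


chain = AgentStateChain()
chain.commit(1, {"intent": "reduce ETH exposure", "confidence": 0.7})
chain.commit(2, {"intent": "hold", "confidence": 0.6})
# An opposite answer for the same step is rejected instead of forking the state:
try:
    chain.commit(2, {"intent": "increase ETH exposure", "confidence": 0.6})
except ValueError as e:
    print(e)
```

The point is only the shape of the mechanism: one canonical update per step plus a replayable history, which is what a trading agent would need before investors could audit and trust it.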
Addressing these three problems is what it would take for AI to carry out real trading or analysis. Of course, identifying the problems comes first: knowing where they lie, and then finding solutions. Although no product has landed 100% yet, the Lobe (Intent Consensus Engine) - Memory (Decentralized Memory Network) - Tools (LMT) architecture can already realize a "Decision - Memory - Execution" loop and serve as Web3-native AI Agent infrastructure. This framework can be applied across many professional fields, including on-chain AI optimization, DeFi automated trading strategies, data management, and more.
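As a rough illustration of how such a loop could fit together, here is a short Python sketch of the "Decision - Memory - Execution" cycle. The class names mirror the Lobe, Memory, and Tools terms above, but everything inside them (a majority vote standing in for intent consensus, an in-process list standing in for a decentralized memory network, placeholder actions) is my own assumption rather than the actual LMT implementation.

```python
from dataclasses import dataclass, field
from collections import Counter
from typing import Callable, Dict, List

@dataclass
class Lobe:
    """Intent Consensus Engine: collapse many candidate intents into one."""
    def decide(self, candidates: List[str]) -> str:
        # Majority vote over sampled intents, so the agent speaks with one voice.
        return Counter(candidates).most_common(1)[0][0]

@dataclass
class Memory:
    """Stand-in for a decentralized memory network: append-only and queryable."""
    records: List[Dict] = field(default_factory=list)

    def remember(self, record: Dict) -> None:
        self.records.append(record)

    def recall(self, key: str, value) -> List[Dict]:
        return [r for r in self.records if r.get(key) == value]

@dataclass
class Tools:
    """Execution layer: named actions the agent is allowed to call."""
    actions: Dict[str, Callable[[], str]]

    def execute(self, intent: str) -> str:
        return self.actions.get(intent, lambda: "no-op")()

def decision_memory_execution_loop(lobe: Lobe, memory: Memory, tools: Tools,
                                   sampled_intents: List[str]) -> str:
    intent = lobe.decide(sampled_intents)             # Decision
    memory.remember({"intent": intent})               # Memory
    result = tools.execute(intent)                    # Execution
    memory.remember({"intent": intent, "result": result})
    return result

# Example run with toy actions (placeholders, not real trading calls):
tools = Tools(actions={"rebalance": lambda: "submitted rebalance tx",
                       "hold": lambda: "did nothing"})
print(decision_memory_execution_loop(Lobe(), Memory(), tools,
                                     ["rebalance", "hold", "rebalance"]))
```

The useful property is that the agent commits to a single intent before anything touches the memory or the tools, so each of the three problems listed above maps onto one component of the loop.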