陈剑Jason 🐡 | Mar 21, 2025 11:36
Mira's public testnet website has launched. In short, Mira tackles the problem of AI confidently talking nonsense, i.e. AI hallucination. When people use AI to search for information, it may return misleading results or even fabricate things that don't exist. Unless you carefully verify the people, events, and data involved, you won't spot the errors at all, because the model's reasoning makes the answer flow perfectly in language and logic: nonsense delivered with a straight face. Hallucination happens because the model's training data is incomplete, so when reasoning out an answer the model is forced to find, or even invent, plausible content to fill the gaps. Hallucination has therefore become a bottleneck for turning AI into a productivity tool: you can never fully trust the results it gives you. I asked ChatGPT, DeepSeek, and Grok whether their answers might contain hallucinations, and all three admitted they might. So, if it were you, how would you design a system to solve AI hallucination? Given that AI answers may be wrong (the AI may "misbehave"), you need a system that supervises and checks its output, and rewards whoever successfully catches an error. Sound familiar? Isn't this exactly the challenge model of today's blockchain consensus mechanisms? Nodes may act maliciously, other nodes verify their work, successful challengers are rewarded, and malicious nodes have their stake slashed. The same mechanism blockchains use against misbehaving nodes can be applied to AI models to tackle hallucination.
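Mira's actual protocol is not public in this post, but the challenge-and-slash idea described above can be sketched in a few lines. In this toy model (all names, stake amounts, and rates are illustrative, not Mira's real parameters), staked verifiers vote on whether an AI claim is valid; the minority side is slashed and the majority side splits the slashed pot plus a reward:

```python
from dataclasses import dataclass

@dataclass
class Verifier:
    name: str
    stake: float  # amount the verifier has locked up (PoS-style)

def settle_round(votes: dict, verifiers: dict,
                 reward: float = 1.0, slash_rate: float = 0.5) -> bool:
    """Majority vote decides the accepted verdict on a claim.
    Minority voters lose slash_rate of their stake; majority voters
    split the slashed pot and each earn an extra reward.
    Returns True if the claim was accepted as valid."""
    valid = [n for n, v in votes.items() if v]
    invalid = [n for n, v in votes.items() if not v]
    accepted = len(valid) >= len(invalid)
    majority, minority = (valid, invalid) if accepted else (invalid, valid)
    # Slash the losing side and pool the confiscated stake.
    pot = 0.0
    for n in minority:
        cut = verifiers[n].stake * slash_rate
        verifiers[n].stake -= cut
        pot += cut
    # Pay out the pot plus the base reward to the winning side.
    for n in majority:
        verifiers[n].stake += pot / len(majority) + reward
    return accepted
```

For example, with three verifiers each staking 10 and votes of valid/valid/invalid, the dissenting verifier loses half its stake while the two in the majority each gain its share of the pot plus the reward.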
So Mira is a decentralized AI verification network that uses both PoW and PoS mechanisms. This is actually easy to understand: the computation an AI model performs is the PoW. The Mira network therefore runs multiple PoW AI models that compute, answering questions and cross-checking each other. Those models must join the network by staking (PoS), because the work of the various models is tied together through an economic model of penalties and rewards. The mainnet has not yet launched, but Mira has already shipped a unified SDK that integrates multiple language models behind one interface. It automatically routes input to different models for load balancing, tracks usage, and corrects errors to some extent. Many products already use Mira's SDK; the team claims it processes over 2 billion characters and handles 5 million API calls per day. The seed round raised 9 million US dollars. Compared with projects in the same space that often raise tens of millions, the current valuation is still very low, and the backers are solid, led by Framework Ventures. Stay tuned 🧐
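Mira's SDK internals aren't shown in this post, but the routing behavior it describes (one interface, multiple backends, load balancing, usage tracking) can be sketched as follows. Everything here is a hypothetical stand-in, using simple round-robin balancing rather than whatever policy Mira actually uses:

```python
import itertools
from collections import Counter

class ModelRouter:
    """Unified interface over several model backends.
    Requests are distributed round-robin and per-model usage is counted."""

    def __init__(self, backends):
        # backends: mapping of model name -> callable(prompt) -> answer
        self.backends = dict(backends)
        self._cycle = itertools.cycle(self.backends)  # round-robin over names
        self.usage = Counter()                        # calls per model

    def generate(self, prompt: str) -> str:
        name = next(self._cycle)      # pick the next backend in rotation
        self.usage[name] += 1         # track usage for billing/monitoring
        return self.backends[name](prompt)
```

A caller would construct the router once with its model backends and then call `generate` everywhere, letting the router spread load and record how many calls each model served.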