蓝狐 | Mar 21, 2025 03:58
Yesterday, Mira Network (@Mira_Channel) launched its public testnet. It aims to build a trust layer for AI. So why does AI need a trust layer, and how does Mira address the problem?

When people discuss AI, they focus on its impressive capabilities. What gets far less attention is that AI "hallucinates" and carries biases. What is an AI "hallucination"? Simply put, AI sometimes makes things up and delivers nonsense with a straight face. For example, if you ask an AI why the moon is pink, it may earnestly offer a string of plausible-sounding explanations.

These hallucinations and biases are partly rooted in today's AI technology paths. Generative AI achieves coherence by predicting the "most likely" next content, but it cannot always verify truthfulness. Moreover, the training data itself contains errors, biases, and even fiction, all of which shape the model's output. In other words, AI learns human language patterns rather than facts themselves.

In short, the current probabilistic generation mechanism and data-driven paradigm make hallucinations almost inevitable. If a biased or hallucinated output is merely everyday knowledge or entertainment content, the consequences are limited for now; but in high-stakes fields such as healthcare, law, aviation, and finance, the consequences are direct and significant. How to address AI hallucination and bias is therefore one of the core problems in AI's evolution. Some approaches use retrieval-augmented generation (combining real-time databases so that verified facts are output preferentially); others introduce human feedback, correcting model errors through manual annotation and supervision.
The Mira (@Mira_network) project also takes aim at AI bias and hallucination: it is trying to build a trust layer for AI that reduces bias and hallucination and improves reliability. So, at the framework level, how does Mira reduce AI bias and hallucination and ultimately deliver trustworthy AI?

Mira's core approach is to validate AI output through consensus among multiple AI models. That is, Mira itself is a verification network that checks the reliability of AI output, and it relies on the consensus of multiple AI models. Just as important, that consensus is reached through decentralized verification, a specialty of the crypto field. Mira combines the two: the synergy of multiple models, plus a collective, decentralized verification pattern that reduces bias and hallucination.

On the verification architecture side, the Mira protocol supports converting complex content into independently verifiable claims. Node operators participate in verifying these claims, and cryptoeconomic incentives and penalties keep them honest; the participation of diverse AI models and decentralized node operators together ensures reliable verification results. Mira's network architecture comprises content conversion, distributed verification, and a consensus mechanism, with content conversion being a key part.
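The multi-model consensus idea described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not Mira's actual protocol: the verifier "models" here are hypothetical stand-ins (`model_a`, `model_b`, `model_c`) that each return a verdict on a claim, and consensus is a simple strict-majority vote.

```python
from collections import Counter

def verify_claim(claim, models):
    """Ask each verifier model to judge a claim; return the majority verdict."""
    verdicts = [model(claim) for model in models]  # each returns "valid" or "invalid"
    verdict, count = Counter(verdicts).most_common(1)[0]
    # Consensus requires a strict majority of verifiers to agree.
    return verdict if count > len(models) / 2 else "no-consensus"

# Hypothetical verifier models standing in for real LLM judges.
model_a = lambda claim: "valid"
model_b = lambda claim: "valid"
model_c = lambda claim: "invalid"

print(verify_claim("The moon orbits the Earth.", [model_a, model_b, model_c]))
# With 2 of 3 verifiers agreeing, the claim is judged "valid".
```

The point of using several heterogeneous models rather than one is that each model hallucinates differently, so errors are less likely to survive a vote than in any single model's output.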
The Mira network first decomposes candidate content (usually submitted by a customer) into distinct verifiable claims (ensuring the models can understand them in the same context). The system distributes these claims to nodes for verification to determine their validity, then aggregates the results to reach consensus; the results and consensus are returned to the customer. In addition, to protect customer privacy, the claims are randomly sharded across different nodes, preventing information leakage during verification.

Node operators are responsible for running verifier models, processing claims, and submitting verification results. Why are node operators willing to participate? Because it is profitable. Where does the profit come from? From the value created for customers. Mira's purpose is to reduce AI's error rate (hallucination and bias); if that goal is met, it generates value, lowering error rates in fields such as healthcare, law, aviation, and finance, where the value would be enormous, so customers are willing to pay. Of course, the sustainability and scale of that payment depend on whether the Mira network can keep delivering value to customers (reducing AI error rates). In addition, to prevent nodes from gaming the system by responding randomly, nodes that persistently deviate from consensus have their staked tokens slashed. In short, a game of economic incentives ensures that node operators participate in verification honestly.
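The flow just described, decompose into claims, shard across nodes, reach consensus, slash deviators, can be sketched as follows. Everything here is an assumption for illustration: the node names, stake amounts, slash penalty, sentence-level `decompose`, and the `judge` callback standing in for a node's verifier model are all hypothetical, not Mira's real parameters or API.

```python
import random
from collections import Counter

random.seed(0)  # deterministic sharding for the example

# Hypothetical stake ledger; amounts are placeholders.
stake = {f"node{i}": 100 for i in range(5)}
SLASH = 10  # assumed penalty for a verdict that deviates from consensus

def decompose(content):
    """Split candidate content into independently verifiable claims."""
    return [s.strip() for s in content.split(".") if s.strip()]

def run_verification(content, judge):
    """judge(node, claim) -> 'valid' | 'invalid'; stands in for a node's model."""
    results = {}
    for claim in decompose(content):
        # Shard: each claim goes to a random subset of nodes, so no single
        # node sees the full content (the privacy property described above).
        nodes = random.sample(list(stake), k=3)
        verdicts = {n: judge(n, claim) for n in nodes}
        consensus, _ = Counter(verdicts.values()).most_common(1)[0]
        for n, v in verdicts.items():
            if v != consensus:
                stake[n] -= SLASH  # economic penalty keeps operators honest
        results[claim] = consensus
    return results

honest = lambda node, claim: "valid"
print(run_verification("The sky is blue. Water is wet.", honest))
```

With honest nodes, every verdict matches consensus and no stake is slashed; a node answering randomly would repeatedly land in the minority and bleed stake, which is the incentive game the text describes.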
Overall, Mira offers a new approach to AI reliability: a decentralized consensus verification network built on multiple AI models, which brings higher reliability to customers' AI services, reduces bias and hallucination, and meets customers' demand for greater accuracy and precision; and, on the basis of the value delivered to customers, it rewards Mira network participants. Summed up in one sentence: Mira is trying to build a trust layer for AI, which helps drive deeper AI adoption. The AI agent frameworks Mira currently works with include ai16z and ARC, among others.

The Mira Network public testnet launched yesterday. Users can participate by using Klok (@klok_mapp), an LLM chat application built on Mira. With Klok, users can experience verified AI output (and compare it with unverified output) and earn Mira points. What the points will be used for has not yet been disclosed.