HyperAGI is the first community-driven decentralized AI project built around the AI rune HYPER·AGI·AGENT.
Please introduce the background of the HyperAGI team and project
HyperAGI is the first community-driven decentralized AI project built around the AI rune HYPER·AGI·AGENT. The HyperAGI team has worked in AI for many years and has accumulated extensive experience in Web3 generative AI applications. Three years ago the team began using generative AI to create 2D images and 3D models, built MOSSAI, an open world of thousands of AI-generated islands on the blockchain, and proposed NFG, a non-fungible crypto-asset standard for AI-generated content. At that time, however, decentralized solutions for AI model training and generation had not yet been established, and relying solely on the platform's own GPU resources could not support a large user base, which limited growth. With the surge in public interest in AI ignited by LLMs, we launched the HyperAGI decentralized AI application platform, which began testing on Ethereum and Bitcoin L2 in Q1 2024.
HyperAGI focuses on decentralized AI applications and aims to cultivate a self-governing cryptocurrency economy, with the ultimate goal of establishing Unconditional Basic AI Income (UBAI). It inherits the strong security and decentralization of Bitcoin and enhances them through an innovative Proof of Useful Work (PoUW) consensus mechanism.
Consumer-grade GPU nodes can join the network without permission and mine the native token $HYPT by performing PoUW tasks such as AI inference and 3D rendering.
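As a rough illustration of this mining loop, the sketch below shows how a node might pull PoUW tasks, do the useful work, and submit a verifiable result that triggers a token mint. It is a minimal, self-contained mock: the `Task`, `MockNetwork`, and hash-based proof are hypothetical stand-ins, not HyperAGI's actual protocol or client.

```python
# Illustrative sketch only (not the official client): a consumer-grade GPU node pulls
# PoUW tasks, performs the work, and submits a verifiable result to earn minted tokens.
from dataclasses import dataclass
import hashlib, json

@dataclass
class Task:
    task_id: int
    kind: str               # "llm_inference" or "render_3d"
    payload: dict

def do_useful_work(task: Task) -> str:
    # Placeholder for the real GPU workload (LLM inference or 3D rendering).
    if task.kind == "llm_inference":
        return f"completion for prompt: {task.payload['prompt']}"
    if task.kind == "render_3d":
        return f"rendered frame for scene: {task.payload['scene']}"
    raise ValueError(f"unsupported task kind: {task.kind}")

def build_proof(task: Task, result: str) -> str:
    # Stand-in for a verifiable execution trace; here just a commitment hash.
    blob = json.dumps({"task": task.task_id, "result": result}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

class MockNetwork:
    """Tiny in-memory stand-in for the task queue, verifier, and token ledger."""
    def __init__(self):
        self.queue = [Task(1, "llm_inference", {"prompt": "hello"}),
                      Task(2, "render_3d", {"scene": "island-42"})]
        self.balances = {}

    def fetch_task(self):
        return self.queue.pop(0) if self.queue else None

    def submit(self, task: Task, result: str, proof: str, reward_to: str):
        # Tokens are minted to the submitting node only if verification passes;
        # no central party transfers them.
        assert proof == build_proof(task, result), "verification failed"
        self.balances[reward_to] = self.balances.get(reward_to, 0) + 10  # toy $HYPT amount

net = MockNetwork()
while (task := net.fetch_task()) is not None:
    result = do_useful_work(task)
    net.submit(task, result, build_proof(task, result), reward_to="miner-wallet-0x01")
print(net.balances)   # {'miner-wallet-0x01': 20}
```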
Users can develop Proof-of-Personhood (PoP) AGI agents driven by large language models (LLMs) using various tools. These agents can be configured as chatbots or as 3D/XR entities in the metaverse. AI developers can instantly use or deploy LLM AI microservices, promoting the creation of programmable, autonomous on-chain intelligent agents.
These programmable intelligent agents can issue or own cryptocurrency assets and operate or trade continuously, contributing to a vibrant, autonomous cryptocurrency economy to support the realization of UBAI. Users holding the HYPER·AGI·AGENT rune tokens are eligible to create a PoP intelligent agent on the Bitcoin layer 1 chain and may soon qualify for intelligent agent basic benefits.
What is an AI agent? Many AI projects claim to support agents; what exactly is an agent? How is the HyperAGI agent different from other agents?
AI agents are not a new concept in academia, but the current market hype has made the concept increasingly confusing. A HyperAGI agent is an LLM-driven embodied agent that can be trained and interact with users in a 3D virtual simulation environment, not merely an LLM-driven chatbot. HyperAGI agents can exist both in the virtual digital world and in the real physical world, and they are currently being integrated into physical robots such as robotic dogs, drones, and humanoid robots. In the future, agents trained in the virtual 3D world will be downloadable into physical robots after additional training, enabling them to perform tasks better.
In addition, ownership of HyperAGI agents belongs entirely to their users, which carries socio-economic significance. PoP (Proof of Personhood) agents representing individual users can receive token-based basic income within the HyperAGI agent economy, incentivizing users to train and interact with their PoP agents and to accumulate data that can prove individual human identity. UBAI also reflects AI equality and democracy.
Is AGI just a gimmick or will it soon become a reality? How does the research and development roadmap of HyperAGI differ from other AI projects?
Although the definition of AGI is not yet settled, AGI has for decades been seen as the holy grail of AI academia and industry. Transformer-based LLMs have begun to be treated as the core of various AI agents and of AGI itself, but that is not HyperAGI's view. While LLMs do provide novel, convenient information extraction together with natural-language planning and reasoning capabilities, they are fundamentally data-driven deep neural networks. As we learned during the big-data wave several years ago, such systems are inherently subject to GIGO (garbage in, garbage out). LLMs also lack features essential for advanced intelligence. At a lower level, because they lack embodiment, such AI agents struggle to understand the world model of human users, let alone plan within an environment and take action to solve real-world problems. At a higher level, LLMs lack self-awareness, reflection, introspection, and other advanced intellectual activities.
Our founder, Landon Wang, has conducted deep, long-term research in AI. In 2004, he proposed Aspect-Oriented AI (AOAI) for artificial neural networks, an innovation that combines neural-inspired computing with aspect-oriented programming (AOP). An aspect is an encapsulation of relationships or constraints among multiple objects; a neuron, for example, is an encapsulation of its relationships and constraints with many other cells. Specifically, a neuron communicates with sensory or motor cells through fibers extending from its cell body, so a neuron encapsulates those relationships and their logic. Likewise, since each AI agent addresses a specific aspect of a problem, it too can technically be modeled as an aspect.
In software implementations of artificial neural networks, neurons or layers are typically modeled as objects, which is easy to understand and maintain in object-oriented programming languages. However, this makes it difficult to adjust the network's topology, and the activation sequence of neurons is relatively fixed. Although this design has shown tremendous power in simple, compute-intensive workloads such as LLM training and inference, its flexibility and adaptability are unsatisfactory. In AOAI artificial neural networks, by contrast, neurons or layers are modeled as aspects rather than objects. This architecture is highly adaptive and flexible, making the self-evolution of neural networks possible.
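To make the contrast concrete, here is a speculative sketch of the aspect-oriented idea as we read it, not the AOAI implementation itself: a neuron holds its relationships to other cells as plain data, so the topology can be rewired or mutated at runtime without touching any class hierarchy.

```python
# Speculative sketch of an "aspect"-style neuron. In the object style, topology is baked
# into class structure; here, relationships are data and can evolve while the program runs.
import random

class AspectNeuron:
    """A neuron modeled as an encapsulation of its relationships (the 'aspect')."""
    def __init__(self, name):
        self.name = name
        self.inputs = {}                      # {source_name: weight} — relationships as data

    def connect(self, source, weight):
        self.inputs[source] = weight          # add or change a relationship at runtime

    def disconnect(self, source):
        self.inputs.pop(source, None)         # topology change without touching class code

    def activate(self, signals):
        # Weighted sum over whatever relationships currently exist, with a ReLU-like response.
        total = sum(w * signals.get(src, 0.0) for src, w in self.inputs.items())
        return max(0.0, total)

signals = {"sensor_a": 1.0, "sensor_b": 0.5}
n1 = AspectNeuron("n1")
n1.connect("sensor_a", 0.8)
n1.connect("sensor_b", -0.3)
print(n1.activate(signals))                   # 0.65

# "Self-evolution" here means mutating the relationship graph itself, not the objects.
n1.connect("sensor_b", n1.inputs["sensor_b"] + random.uniform(-0.1, 0.1))
n1.disconnect("sensor_a")
print(n1.activate(signals))
```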
HyperAGI combines efficient LLM with evolvable AOAI, forming a path to feasible AGI that combines the efficiency of traditional artificial neural networks and the self-evolution characteristics of AO neural networks.
What is the vision of HyperAGI?
The vision of HyperAGI is to achieve Unconditional Basic AI Income (UBAI): to build a future in which technology serves everyone equally, to break the cycle of exploitation, and to create a truly decentralized and fair digital society. Unlike some other blockchain projects that merely promote UBI, HyperAGI's UBAI has a clear implementation path through the intelligent agent economy rather than being a pipe dream. Bitcoin, proposed by Satoshi Nakamoto, was a huge innovation for humanity, but it is only a decentralized digital currency with little practical utility beyond that. The leap and rise of artificial intelligence now make it possible to create value in a decentralized manner. In this model, people benefit from AI running on machines rather than from extracting value from other people. A true crypto world based on code is emerging, in which machines are created for the benefit and well-being of humanity. In such a world there may still be a hierarchy of AI agents, but human exploitation is eliminated, because the agents themselves may hold some form of autonomy. The ultimate purpose and meaning of artificial intelligence, as encoded on the blockchain, is to serve humanity.
What is the relationship between Bitcoin L2 and AI, and why build AI on Bitcoin L2?
- Bitcoin L2 as a means of payment for AI agents
Bitcoin is the epitome of a "neutral medium" and is well-suited for artificial intelligence agents engaged in value transactions. Bitcoin can eliminate the inherent inefficiencies and "friction" of fiat currency. This "digitally native" medium is a natural environment for AI to engage in value exchange. Bitcoin L2 enhances the programmability of Bitcoin, meeting the speed requirements for value exchange by artificial intelligence, making Bitcoin a native currency for artificial intelligence.
- Decentralized AI governance on Bitcoin L2
Given the current centralization trend in AI, decentralized AI alignment and governance have garnered significant attention. The more powerful smart contracts on Bitcoin L2 can serve as rules to regulate the behavior and protocol modes of AI agents, achieving decentralized AI alignment and governance. Additionally, Bitcoin's maximal neutrality makes it easier to achieve consensus on AI alignment and governance.
- Issuance of AI assets on Bitcoin L2
In addition to issuing AI agents as assets on Bitcoin L1, the high-performance Bitcoin L2 can meet the needs of AI agents to issue AI assets, forming the foundation of the intelligent agent economy.
- AI agents as a killer application for Bitcoin and Bitcoin L2
Since its inception, Bitcoin has lacked practical applications beyond being a store of value, largely for performance reasons. With L2, Bitcoin gains stronger programmability. Because AI agents are generally used to solve real-world problems, Bitcoin-driven AI agents can be put to genuine use, and the scale and frequency of their activity can make them a killer application for Bitcoin and its L2. While human economies may not prioritize Bitcoin as a payment method, the robot economy might: a large number of AI agents tirelessly making Bitcoin micro-payments 24/7 could increase demand for Bitcoin in ways that are currently difficult to imagine.
- AI computing can enhance the security of Bitcoin L2
AI computing can complement Bitcoin's Proof of Work (PoW), or even replace PoW with Proof of Useful Work (PoUW), so that the energy spent securing the network also powers AI agents. Through L2, AI can make Bitcoin an intelligence-driven green blockchain without resorting to a Proof of Stake (PoS) mechanism like Ethereum's. Our proposed Hypergraph Consensus is a PoUW based on 3D/AI computing, which is explained further below.
What sets HyperAGI apart from other decentralized AI projects?
The HyperAGI project stands out in the Web3 AI field with distinct differences in vision, solutions, and technology. In terms of solutions, HyperAGI features GPU computing consensus, AI embodiment, and tokenization, making it a decentralized semi-AI semi-financial application. Recently, five essential characteristics that a decentralized AI platform should possess were proposed in academia, and we have briefly compared and contrasted existing decentralized AI-related projects based on these five features.
These five characteristics are:
(i) Verifiability of remotely run AI models
(ii) Usability of publicly available AI models
(iii) Incentivization for AI developers and users
(iv) Global governance of essential solutions in the digital society
(v) No vendor lock-in
Using these five criteria to assess existing or planned projects, we have summarized and compared them. We believe verifiability is the fundamental characteristic of a decentralized AI project, serving as the foundation for usability, incentivization, governance, and freedom from vendor lock-in. Projects that lack verifiability may still be decentralized, such as decentralized compute-leasing or data, algorithm, and model marketplaces, but they are not decentralized AI projects.
Projects that may meet verifiability include Giza, Cortex AI, Ofelimos, and Project PAI, while those that may not include GPU compute-leasing projects, DeepBrain Chain, EMC, Aethir, IO.NET, CLORE.AI, SingularityNET, Bittensor, AINN, Fetch.ai, Ocean Protocol, and algovera.ai.
HyperAGI is a fully decentralized AI protocol based on the Hypergraph PoUW consensus mechanism and a fully decentralized Bitcoin L2 Stack, which will be upgraded to a dedicated Bitcoin AI L2 in the future.
PoUW protects the network in the most secure way without wasting computing power: all of the computing power provided by miners can be used for LLM inference and cloud rendering services. The vision of PoUW is that computing power can be used to solve any problem submitted to the decentralized network.
Why now?
1. The explosion of LLMs and their applications
OpenAI's ChatGPT reached 100 million users in just three months, sparking a global frenzy of development, application, and investment in large language models (LLMs). So far, however, the technical development and training of LLMs have been highly centralized, raising concerns among academia, industry, and the public about monopolization of AI technology by a few major providers, breaches of data privacy, and vendor lock-in by cloud computing companies. These issues stem from centralized platforms controlling the current internet and its application gateways, and from the absence of a network suitable for large-scale AI applications. The AI community has begun to build local and decentralized AI projects, with Ollama representing local operation and Petals representing decentralization. Ollama lets small and medium-sized LLMs run on personal computers or even smartphones through quantization and reduced precision, protecting user data privacy and other rights, but it clearly cannot support production environments and networked applications. Petals achieves fully decentralized LLM inference through BitTorrent-style peer-to-peer technology, yet it lacks consensus and incentive-layer protocols and remains confined to a small circle of researchers.
2. LLM-driven intelligent agents
Backed by LLMs, intelligent agents can engage in higher-level reasoning and possess certain planning capabilities. With the help of natural language, multiple agents can also form human-like social collaborations. Frameworks for LLM-driven intelligent agents have been proposed, such as Microsoft's AutoGen, LangChain, and CrewAI.
A large number of AI entrepreneurs and developers are currently focused on LLM-driven intelligent agents and their applications. There is significant demand for stable, scalable LLM inference, but today it is mostly met by renting GPU inference instances from cloud computing companies. In March 2024, NVIDIA announced its generative AI microservice platform ai.nvidia.com, including LLM microservices, to meet this demand, though it had not been officially launched at the time of writing. Building LLM-driven agents is as popular as website building once was, but the work still follows a traditional Web2 mode of collaboration: agent developers must rent GPUs or purchase API access from LLM providers to run their agents, creating significant friction that hinders the rapid growth of the agent ecosystem and the value transfer of the agent economy.
3. Embodied intelligent agent simulation environment
Currently, most intelligent agents can only access and operate on a handful of APIs, interacting with them through code or scripts that write LLM-generated control instructions or read external state. General intelligent agents should not only understand and generate natural language but also understand the human world, and after appropriate training they should transfer to robotic systems (e.g., drones, robotic vacuums, humanoid robots) and complete designated tasks. Such agents are called embodied intelligent agents.
Training embodied intelligent agents requires a large amount of realistic visual data to help the agents better understand specific environments and the real world, shortening robot training and development time, improving training efficiency, and reducing costs. Today the simulation environments used to train embodied intelligence are built and owned by only a few companies, such as Microsoft's Minecraft and NVIDIA's Isaac Gym, and there is no decentralized environment to meet the training needs of embodied intelligence. Recently, some game engines have begun to focus on AI; Epic's Unreal Engine, for example, is starting to promote AI training environments compatible with OpenAI Gym.
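For readers unfamiliar with what "Gym-compatible" means in practice, the toy sketch below follows the reset/step convention such training environments expose. The environment, observations, and reward are invented for illustration; this is not HyperAGI's or Unreal Engine's actual API.

```python
# Minimal sketch of a Gym-style interface for training an embodied agent: the agent
# repeatedly observes, acts, and receives a reward until the episode terminates.
import random

class ToyIslandEnv:
    """A toy 1-D 'walk to the target' environment following the reset/step convention."""
    def __init__(self, length=10):
        self.length = length
        self.position = 0

    def reset(self):
        self.position = 0
        return self.position                          # initial observation

    def step(self, action):                           # action: -1 (left) or +1 (right)
        self.position = max(0, min(self.length, self.position + action))
        done = self.position == self.length           # reached the target
        reward = 1.0 if done else -0.01               # step penalty encourages short paths
        return self.position, reward, done, {}

env = ToyIslandEnv()
obs = env.reset()
done = False
while not done:
    action = random.choice([-1, 1])                   # stand-in for a learned or LLM-driven policy
    obs, reward, done, info = env.step(action)
print("reached target at position", obs)
```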
4. Bitcoin L2 ecosystem
Although Bitcoin sidechains have existed for many years, they have mainly been used for payments, and the lack of smart contracts cannot support complex on-chain applications. The emergence of EVM-compatible Bitcoin L2 allows Bitcoin to support applications such as decentralized AI through L2. Decentralized AI requires a completely decentralized, compute-centric blockchain network, rather than being limited to increasingly centralized PoS blockchain networks. The introduction of new protocols for Bitcoin-native assets, such as Inscriptions and Runes, makes it possible to establish an ecosystem and applications based on Bitcoin. For example, the Rune HYPER•AGI•AGENT completed fair minting within an hour, and in the future, HyperAGI will issue more AI assets and community-driven applications on Bitcoin.
Discussing the technical framework and solutions of HyperAGI
1. How to achieve a decentralized LLM-driven AI intelligent agent application platform?
The biggest challenge for decentralized AI today is making remote inference of large AI models, and the training and inference of embodied intelligent agents, verifiable with high-performance, low-overhead algorithms. Without verifiability, the system degrades into a traditional multi-party supply-and-demand marketplace and cannot become a fully decentralized AI application platform.
Verifiable AI computing requires a PoUW consensus algorithm, on top of which a decentralized incentive mechanism can be built. Specifically, the token mint call is made by a node itself after it completes a computing task and submits a verifiable result, rather than tokens being transferred to nodes in any centralized manner.
To achieve verifiable AI computing, AI computing itself must first be defined. AI computing spans many levels, such as machine instructions, CUDA instructions, C++, and Python, and the 3D computing required to train embodied intelligence likewise spans different levels, such as shader languages, OpenGL, C++, and Blueprint scripts.
HyperAGI's PoUW consensus algorithm is implemented with computational graphs. A computational graph is a directed graph and a way to express and evaluate mathematical expressions, a "language" for describing equations: its nodes are variables or mathematical operations (simple functions), and its edges carry the data flowing between them.
1.1 Verifiable computations of any kind (such as 3D and AI computations) are defined as computational graphs, with different levels of computation expressed as subgraphs. There are currently two levels, and the top-level computational graph is deployed on-chain for verification nodes to check (an illustrative data-structure sketch follows after item 1.3).
1.2 LLM models and 3D scene levels are loaded and run in a fully decentralized manner. When a user accesses an LLM for inference or enters a 3D scene for rendering, the HyperAGI intelligent agent starts a trusted node to run the same hypergraph (LLM or 3D scene).
1.3 If a verification node finds that the result submitted by a mining node is inconsistent with the result submitted by the trusted node, a binary search is performed over the off-chain computation results of the second-level computational graph (subgraph) to locate the subgraph node (operator) where the discrepancy occurred. The subgraph operators are pre-deployed in a smart contract, which is then called with the disputed operator's inputs to re-execute the computation and decide the correct result (see the dispute sketch below).
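The following sketch shows one way the two-level structure described in 1.1 could be represented as data: a top-level node (say, a network layer or a render pass) expands into a subgraph of primitive operators whose intermediate values a verifier can replay. The classes and the example subgraph are illustrative assumptions, not HyperAGI's on-chain format.

```python
# Sketch of a two-level computational graph: a top-level node expands into a subgraph of
# primitive operators, and evaluation keeps every intermediate result for later checking.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class OpNode:
    name: str
    fn: Callable[..., float]
    inputs: List[str]                 # names of upstream nodes (the directed edges)

@dataclass
class Subgraph:
    nodes: List[OpNode]               # primitive operators in topological order

    def run(self, feeds: Dict[str, float]) -> Dict[str, float]:
        values = dict(feeds)
        for node in self.nodes:
            values[node.name] = node.fn(*(values[i] for i in node.inputs))
        return values                 # per-operator results: what a verifier would replay

# Top level: "layer1" is one on-chain node; its off-chain subgraph is y = relu(w*x + b).
layer1 = Subgraph(nodes=[
    OpNode("mul", lambda w, x: w * x, ["w", "x"]),
    OpNode("add", lambda m, b: m + b, ["mul", "b"]),
    OpNode("relu", lambda a: max(0.0, a), ["add"]),
])

print(layer1.run({"w": 2.0, "x": 3.0, "b": -1.0}))
# {'w': 2.0, 'x': 3.0, 'b': -1.0, 'mul': 6.0, 'add': 5.0, 'relu': 5.0}
```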
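And here is a minimal sketch of the dispute step from 1.3, assuming the miner and the trusted node have each published per-operator traces for the same subgraph. The trace format, reference operators, and `settle_on_chain` function are hypothetical; on-chain, the pre-deployed operator contract would play that settlement role.

```python
# Locate the first operator where two per-operator traces disagree, then re-run just that
# operator against a reference implementation (standing in for the on-chain contract).

def first_divergence(miner_trace, trusted_trace):
    # Binary search works because once the traces diverge they stay divergent:
    # each operator's output feeds the next one.
    lo, hi = 0, len(miner_trace) - 1
    answer = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if miner_trace[mid] == trusted_trace[mid]:
            lo = mid + 1              # still in agreement; the fault lies later
        else:
            answer = mid              # candidate fault; check whether it starts earlier
            hi = mid - 1
    return answer

def settle_on_chain(op_name, inputs):
    # Stand-in for calling the pre-deployed operator in the smart contract.
    reference = {"mul": lambda a, b: a * b,
                 "add": lambda a, b: a + b,
                 "relu": lambda a: max(0.0, a)}
    return reference[op_name](*inputs)

# Each trace entry: (operator name, operator inputs, reported output).
miner   = [("mul", (2.0, 3.0), 6.0), ("add", (6.0, -1.0), 5.0), ("relu", (5.0,), 7.0)]
trusted = [("mul", (2.0, 3.0), 6.0), ("add", (6.0, -1.0), 5.0), ("relu", (5.0,), 5.0)]

i = first_divergence(miner, trusted)
op, inputs, claimed = miner[i]
correct = settle_on_chain(op, inputs)
print(f"dispute at operator '{op}': miner claimed {claimed}, contract computes {correct}")
```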
2. How to avoid excessive computing overhead?
Another challenge of verifiable AI computing is controlling the additional computing overhead. In Byzantine consensus protocols, consensus forms when 2/3 of the nodes agree, which would mean every node repeating the same AI inference computation, an unacceptable waste for AI workloads. HyperAGI instead requires only a small number of nodes, between 1 and n + m (see 2.1 and 2.2 below), to perform additional verification computation.
2.1 An LLM never performs inference alone: the HyperAGI intelligent agent starts at least one trusted node for "companion computing."
Because LLM inference runs through the model's deep neural network layer by layer, with each layer taking the previous layer's output as its input until inference completes, multiple users can concurrently access the same LLM instance.
Therefore, the number of trusted nodes needed for additional computation is at most the number of LLMs, m, and at minimum a single trusted node suffices for "companion computing."
2.2 3D scene rendering is similar. Each user entering a scene activates a hypergraph, and the HyperAGI intelligent agent starts a trusted node for that hypergraph to perform the corresponding computation. If n users enter different 3D scenes, at most n "companion computing" trusted nodes are started.
In summary, the number of nodes participating in additional computation is at most n + m, where n is the number of users currently in 3D scenes and m is the number of LLMs being served; in practice the count is a random value of at least 1 that roughly follows a Gaussian distribution. This effectively avoids wasting resources while preserving the network's verification efficiency.
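As a back-of-the-envelope comparison (with invented numbers, not measurements), the snippet below contrasts the n + m companion-computing bound with the redundancy a 2/3 BFT-style scheme would imply.

```python
# Illustrative comparison of redundant verification work: full BFT-style replication
# vs. the companion-computing upper bound of n + m trusted nodes.
total_nodes = 1000          # mining nodes serving inference/rendering (assumed)
n = 40                      # users currently inside 3D scenes (assumed)
m = 5                       # distinct LLMs being served (assumed)

bft_redundancy = (2 / 3) * total_nodes        # ~667 nodes would re-run every computation
companion_upper_bound = n + m                 # at most 45 trusted nodes re-compute
print(f"BFT-style redundancy: ~{bft_redundancy:.0f} nodes")
print(f"companion computing:  <= {companion_upper_bound} nodes "
      f"({companion_upper_bound / total_nodes:.1%} of the network)")
```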
How does AI combine with Web3 to form semi-AI semi-financial applications?
AI developers can deploy intelligent agents as smart contracts whose on-chain data contains the top-level hypergraph. Users or other intelligent agents call methods of the agent contract and pay the corresponding tokens; the serving agent must then complete the corresponding computation and submit verifiable results, after which the parties carry out their decentralized business interaction.
Agents need not worry about completing work without receiving tokens, and payers need not worry about paying tokens without receiving correct computation results. The capability and value of an agent's business are reflected in the secondary-market price and market capitalization of its assets (including ERC-20 tokens and ERC-721 or ERC-1155 NFTs).
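The escrow-like flow implied here might look like the following simplified model. It is written in Python purely for illustration (a real deployment would be a smart contract), and the class, method names, and hash-based "proof" are assumptions rather than HyperAGI's interfaces.

```python
# Simplified pay-per-call model: the caller's tokens are held until the serving agent
# submits a result whose proof checks out, so neither side has to trust the other.
import hashlib, json

def proof_of(result) -> str:
    return hashlib.sha256(json.dumps(result, sort_keys=True).encode()).hexdigest()

class AgentContract:
    def __init__(self, agent_wallet, price):
        self.agent_wallet = agent_wallet
        self.price = price
        self.escrow = {}                      # call_id -> (caller, amount held)
        self.balances = {}

    def call(self, call_id, caller, payment):
        assert payment >= self.price, "insufficient payment"
        self.escrow[call_id] = (caller, payment)     # hold tokens until verification

    def submit_result(self, call_id, result, proof):
        caller, amount = self.escrow.pop(call_id)
        if proof == proof_of(result):                # stand-in for hypergraph verification
            self.balances[self.agent_wallet] = self.balances.get(self.agent_wallet, 0) + amount
        else:
            self.balances[caller] = self.balances.get(caller, 0) + amount   # refund on failure
        return result

contract = AgentContract(agent_wallet="agent-0xA1", price=5)
contract.call(call_id=1, caller="user-0xB2", payment=5)
answer = {"reply": "route planned"}
contract.submit_result(1, answer, proof_of(answer))
print(contract.balances)                      # {'agent-0xA1': 5}
```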
Of course, HyperAGI's applications are not limited to semi-AI semi-financial ones. The aim is to achieve UBAI: to build a future in which technology serves everyone equally, break the cycle of exploitation, and create a truly decentralized and fair digital society.