Besides making money and storytelling, what else can Crypto do for AI?


In the field of AI, many fundamental issues can be addressed through cryptographic technology.

Author: Pavel Paramonov

Compiled by: Deep Tide TechFlow

The founder of Curve Finance, @newmichwill, recently stated in a tweet that the main purpose of cryptocurrency lies in DeFi (decentralized finance), and that AI (artificial intelligence) fundamentally does not need cryptocurrency. While I agree that DeFi is an important component of the crypto space, I do not share the view that AI does not need cryptocurrency.

With the rise of AI agents, many of which come with a token, people have come to mistakenly believe that the intersection of crypto and AI is nothing more than these agents. Another important topic that is often overlooked is "decentralized AI," which concerns the training of AI models themselves.

What bothers me about certain narratives is that most users blindly assume that something must be important and useful simply because it is popular, and, even worse, believe that the sole purpose of these narratives is to extract as much value as possible (in other words, to make money).

When discussing decentralized AI, we should first ask ourselves: Why does AI need decentralization? And what consequences will this bring?

It turns out that the concept of decentralization is almost always inevitably linked to the idea of "incentive alignment."

In the field of AI, many fundamental issues can be solved through cryptographic technology, and there are mechanisms that not only address existing problems but also add more credibility to AI.

So, why does AI need cryptocurrency?

1. High computational costs limit participation and innovation

For better or worse, large AI models require substantial computational resources, which naturally limits who can participate. In most cases, AI models need vast amounts of data and raw compute that a single individual can hardly provide.

This issue is particularly prominent in open-source development. Contributors not only need to invest time in training models but also must invest computational resources, making open-source development inefficient.

Indeed, individuals can allocate significant resources to run AI models, just as users can allocate computational resources to run their own blockchain nodes.

However, this does not fundamentally solve the problem, as the computational power is still insufficient to complete the relevant tasks.

Independent developers or researchers cannot participate in the development of large AI models like LLaMA simply because they cannot afford the computational costs required to train the models: thousands of GPUs, data centers, and additional infrastructure are needed.

Here are some figures to give a sense of the scale:

→ Elon Musk stated that the latest Grok 3 model was trained using 100,000 Nvidia H100 GPUs.

→ Each chip is valued at approximately $30,000.

→ The total cost of the AI chips used to train Grok 3 is about $3 billion.
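A quick back-of-the-envelope check of those figures (a rough estimate for the GPU hardware alone; it ignores data centers, power, networking, and staffing):

```python
# Rough cost estimate based on the figures above (GPU hardware only).
num_gpus = 100_000        # H100 GPUs reportedly used to train Grok 3
price_per_gpu = 30_000    # approximate cost per chip, in USD

total_cost = num_gpus * price_per_gpu
print(f"GPU hardware alone: ${total_cost / 1e9:.1f}B")  # -> $3.0B
```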

This problem is somewhat similar to the process of building a startup, where individuals may have the time, technical skills, and execution plans but initially lack sufficient resources to realize their vision.

As @dbarabander pointed out, traditional open-source software projects only require contributors to donate time, while open-source AI projects require both time and substantial resources, such as computational power and data.

Relying solely on goodwill and volunteer efforts is insufficient to incentivize enough individuals or groups to provide these costly resources. Additional incentive mechanisms are a necessary condition to drive participation.

2. Cryptographic technology is the best tool for achieving incentive alignment

Incentive alignment refers to the establishment of rules that encourage participants to contribute to the system while also benefiting themselves.

Cryptographic technology has a long track record of helping different systems achieve incentive alignment; one of the most notable examples is the decentralized physical infrastructure network (DePIN) sector, which is built around exactly this idea.

For instance, projects like @helium and @rendernetwork have achieved incentive alignment through distributed node and GPU networks and have become reference examples.

So, why can't we apply this model to the AI field to make its ecosystem more open and accessible?

It turns out that we can.

The core of driving Web3 and cryptographic technology development lies in "ownership."

You own your data, you own your incentive mechanisms, and even when you hold certain tokens, you own a part of the network. Granting ownership to resource providers can incentivize them to contribute their assets to the project, expecting to gain returns from the success of the network.

To make AI more widespread, cryptographic technology is the optimal solution. Developers can freely share model designs across projects, while computational and data providers can exchange resources for ownership shares (incentives).

3. Incentive alignment is closely related to verifiability

If we envision a decentralized AI system with proper incentive alignment, it should inherit some characteristics of classic blockchain mechanisms:

  1. Network Effects.

  2. Lower initial requirements, where nodes can earn returns through future profits.

  3. Slashing Mechanisms to penalize malicious actors.

Slashing in particular requires verifiability: if we cannot verify who the malicious actors are, we cannot punish them, which leaves the system wide open to cheaters, especially in cross-team collaboration.
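As an illustration of why slashing and verifiability go hand in hand, here is a minimal, hypothetical sketch in Python. The names (`Node`, `settle`, the verification callback) are invented for this example and do not refer to any specific protocol; the point is simply that a node can only be penalized if its claimed work can actually be checked.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Node:
    address: str
    stake: float        # collateral locked up to participate
    rewards: float = 0.0

def settle(node: Node, claimed_output: str, verify: Callable[[str], bool],
           reward: float = 1.0, slash_fraction: float = 0.5) -> None:
    """Reward the node if its claimed work verifies; otherwise slash its stake."""
    if verify(claimed_output):
        node.rewards += reward
    else:
        node.stake -= node.stake * slash_fraction  # misbehavior is costly

# Toy usage: the verification callback is a stand-in for a real check,
# e.g. re-running a sampled inference and comparing outputs.
honest = Node("0xA", stake=100.0)
cheater = Node("0xB", stake=100.0)
expected = "correct-result"
settle(honest, "correct-result", lambda out: out == expected)
settle(cheater, "fabricated-result", lambda out: out == expected)
print(honest.rewards, cheater.stake)  # 1.0 50.0
```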

In a decentralized AI system, verifiability is crucial because we do not have a centralized trust point. Instead, we pursue a system that is trustless but verifiable. Here are several components that may require verifiability:

  • Benchmark Phase: The system outperforms others on certain metrics (such as x, y, z).

  • Inference Phase: Whether the system operates correctly, i.e., the "thinking" phase of the AI.

  • Training Phase: Whether the system has been properly trained or adjusted.

  • Data Phase: Whether the system has correctly collected data.

There are currently hundreds of teams building projects on @eigenlayer, and I have recently noticed AI receiving more attention there than ever, which makes me wonder how well this fits the original restaking vision.

Any AI system that hopes to achieve incentive alignment must be verifiable.

In this context, slashing mechanisms are equivalent to verifiability: if a decentralized system can punish malicious actors, it means it can identify and verify the existence of these malicious behaviors.

If the system is verifiable, AI can leverage cryptographic technology to tap into global compute and data resources and thereby build larger and stronger models, because more resources (compute + data) generally lead to better models (at least under the current state of the technology).

@hyperbolic_labs has already demonstrated the potential of collaborative computing resources. Any user can rent GPUs to train AI models that are more complex than they can run at home, and at a lower cost.

How can AI computation be made both efficient and verifiable?

Some may argue that there are now many cloud solutions available to rent GPUs, which have already solved the computational resource issue.

However, cloud solutions like AWS or Google Cloud are highly centralized and employ a so-called "waitlist strategy," artificially creating a false sense of high demand, thereby driving up prices. This phenomenon is particularly common in the oligopolistic landscape of the field.

In reality, there are vast amounts of GPU resources sitting idle in data centers, mining farms, and even in the hands of individuals, which could have been used for computational contributions to AI model training but are wasted.

You may have heard of @getgrass_io, which allows users to sell unused bandwidth to businesses, thus avoiding waste of bandwidth resources and earning some rewards.

I am not saying that computational resources are infinite, but any system can achieve a win-win situation through optimization: on one hand, providing a more open market for those who need more resources for AI model training; on the other hand, allowing those who contribute these resources to receive corresponding rewards.

The Hyperbolic team has developed an open GPU market. Here, users can rent GPUs for AI model training, saving up to 75% of costs, while GPU providers can monetize idle resources and earn profits.

Here is an overview of how it works:

Hyperbolic organizes connected GPUs into clusters and nodes, allowing computational power to scale according to demand.

At the core of this architecture is the "Proof of Sampling" model: rather than verifying everything, the network randomly selects a subset of transactions to check, which reduces the workload and the computational overhead.

The main challenge lies in AI inference: every inference run on the network needs to be verified, ideally without the verification mechanism itself adding significant computational overhead.

As I mentioned earlier, if something can be verified and the verification shows that the rules were violated, that behavior must be punished (slashing).

By adopting the Actively Validated Service (AVS) model, Hyperbolic adds further verifiability to the system: validators are randomly selected to verify output results, which keeps incentives aligned because dishonest behavior becomes unprofitable under this mechanism.
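To make the sampling idea concrete, here is a simplified sketch of my own (it is not Hyperbolic's actual implementation): only a random fraction of inference results is re-checked, and any mismatch found by the checker is what would trigger slashing.

```python
import random

def sample_and_verify(results, recompute, sample_rate=0.1):
    """Randomly re-check a fraction of claimed inference results.

    `results` maps a request id to the output an operator claimed;
    `recompute` re-runs the inference for a request id (the trusted
    reference in this toy example). Returns the ids that failed.
    """
    sampled = [rid for rid in results if random.random() < sample_rate]
    return [rid for rid in sampled if results[rid] != recompute(rid)]

# Toy usage: one operator reports a wrong answer for request 7.
claimed = {i: i * 2 for i in range(100)}
claimed[7] = 999
bad = sample_and_verify(claimed, recompute=lambda rid: rid * 2, sample_rate=0.2)
print(bad)  # [7] if request 7 happened to be sampled, otherwise []
```

Sampling trades exhaustive checking for a probabilistic guarantee: with a large enough stake at risk, even a modest chance of being caught makes cheating unprofitable in expectation.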

To train an AI model and make it more refined, two main resources are needed: computational power and data. Renting computational power is one solution, but we still need to obtain data from somewhere, and we need diverse data to avoid potential biases in the model.

Verifying data from different sources for AI

The more data there is, the better the model; but the problem is that you usually need diverse data. This is a major challenge faced by AI models.

Data protocols have existed for decades. Whether the data is public or private, data brokers collect this data in some way, possibly paying for it or not, and then sell it for profit.

The problems we face when acquiring suitable data for AI models include: single points of failure, censorship, and the lack of a trustless way to provide authentic and reliable data to "feed" AI models.

So, who needs such data?

First, there are AI researchers and developers who wish to train and infer their models using real and appropriate inputs.

For example, OpenLayer allows anyone to add data streams to the system or AI models without permission, and the system can record each available data point in a verifiable manner.

OpenLayer also utilizes zkTLS (Zero-Knowledge Transport Layer Security Protocol), which I have detailed in my previous articles. This protocol ensures that the data reported by operators is indeed obtained from the source (verifiability).

Here’s how OpenLayer works:

  1. Data consumers publish data requests to OpenLayer's smart contracts and retrieve results through an API, much like with a conventional data oracle (on-chain or off-chain).

  2. Operators register through EigenLayer, staking assets that secure the OpenLayer AVS, and run the AVS software.

  3. Operators subscribe to tasks, process, and submit data to OpenLayer while storing the raw responses and proofs in decentralized storage.

  4. For variable results, aggregators (special operators) standardize the outputs.

Developers can request the latest data from any website and connect it to the network. If you are developing an AI-related project, you can obtain reliable real-time data.
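To make the flow above concrete, here is a heavily simplified, hypothetical sketch (invented names, everything off-chain, and a hash standing in for a real zkTLS proof, so it illustrates the idea rather than OpenLayer's actual implementation): a consumer posts a request, operators answer it with a proof artifact, and an aggregator keeps only the responses whose proofs check out before standardizing the result.

```python
import hashlib
import statistics
from dataclasses import dataclass

@dataclass
class DataRequest:
    request_id: int
    source_url: str      # where the operator should fetch the data from

@dataclass
class OperatorResponse:
    request_id: int
    operator: str
    value: float         # e.g. a price fetched from the source
    proof: str           # stand-in for a zkTLS proof that the value came from source_url

def submit(request: DataRequest, operator: str, value: float) -> OperatorResponse:
    # In the real system this would be a zkTLS attestation; here it is just a hash.
    proof = hashlib.sha256(f"{request.source_url}:{value}".encode()).hexdigest()
    return OperatorResponse(request.request_id, operator, value, proof)

def verify(request: DataRequest, resp: OperatorResponse) -> bool:
    expected = hashlib.sha256(f"{request.source_url}:{resp.value}".encode()).hexdigest()
    return resp.proof == expected

def aggregate(request: DataRequest, responses: list[OperatorResponse]) -> float:
    """Aggregator role: drop unverifiable responses, then standardize (median)."""
    valid = [r.value for r in responses if verify(request, r)]
    return statistics.median(valid)

req = DataRequest(1, "https://example.com/price-feed")
answers = [submit(req, f"op{i}", v) for i, v in enumerate([99.8, 100.1, 100.0])]
answers.append(OperatorResponse(1, "op3", 5000.0, proof="forged"))  # fails verification
print(aggregate(req, answers))  # 100.0, the forged response is ignored
```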

After discussing the AI computation process and how to obtain verifiable data, the next focus should be on the two core components of AI models: the computation itself and its verification.

AI computation must be verified to ensure correctness

Ideally, nodes must prove their computational contributions to ensure the proper functioning of the system.

In the worst-case scenario, nodes may falsely claim to provide computational power while actually doing no real work.

Requiring nodes to prove their contributions ensures that only legitimate participants are recognized, thus avoiding malicious behavior. This mechanism is very similar to traditional Proof of Work, with the difference being the type of work performed by the nodes.

Even if we build appropriate incentive alignment mechanisms into the system, nodes that cannot prove, in a permissionless way, that they actually completed the work may receive rewards that do not match their real contributions, leading to unfair reward distribution.

If the network cannot assess computational contributions, it may result in some nodes being assigned tasks beyond their capabilities while others remain idle, ultimately causing inefficiencies or system failures.

By proving computational contributions, the network can quantify each node's efforts using standardized metrics (such as FLOPS, floating-point operations per second). This method allows rewards to be allocated based on actual work completed rather than merely on whether a node exists in the network.
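As a toy illustration of rewarding measured work rather than mere presence (my own sketch, not any project's actual formula), the snippet below splits a reward pool in proportion to each node's verified FLOPS:

```python
def allocate_rewards(measured_flops: dict[str, float], total_reward: float) -> dict[str, float]:
    """Split a reward pool in proportion to each node's verified FLOPS."""
    total_flops = sum(measured_flops.values())
    if total_flops == 0:
        return {node: 0.0 for node in measured_flops}
    return {node: total_reward * flops / total_flops
            for node, flops in measured_flops.items()}

# Three nodes contribute different amounts of verified compute.
print(allocate_rewards({"node_a": 8e12, "node_b": 2e12, "node_c": 0.0}, total_reward=100.0))
# -> {'node_a': 80.0, 'node_b': 20.0, 'node_c': 0.0}
```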

The team from @HyperspaceAI has developed a "Proof-of-FLOPS" system that allows nodes to rent out unused computational power. In exchange, they receive "flops" points, which serve as the network's universal currency.

Here’s how the architecture works:

  1. The process begins with a challenge issued to the user, who responds by submitting a commitment to the challenge.

  2. Hyperspace Rollup manages the process, ensuring the security of submissions and obtaining random numbers from oracles.

  3. The user publicly reveals the challenged indices, completing the challenge process.

  4. Operators check the responses and notify the Hyperspace AVS contract of valid results, which are then confirmed through the EigenLayer contract.

  5. Liveness multipliers are calculated and flops points are granted to the user.
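Below is a drastically simplified, hypothetical sketch of the commit-and-reveal idea behind such a challenge flow (it is not Hyperspace's actual protocol): the node commits to its results per chunk, the challenge picks random indices, and the verifier checks the revealed chunks against the earlier commitment.

```python
import hashlib
import random

def h(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

# 1. The node does the work and commits to each result chunk up front.
results = [f"chunk-{i}-output" for i in range(16)]
commitment = [h(r) for r in results]       # published before the challenge

# 2. The challenge selects random indices (randomness would come from an oracle).
challenged = random.sample(range(len(results)), k=4)

# 3. The node reveals only the challenged chunks.
revealed = {i: results[i] for i in challenged}

# 4. Operators verify the revealed chunks against the commitment.
ok = all(h(revealed[i]) == commitment[i] for i in challenged)
print("challenge passed" if ok else "challenge failed -> slash")
```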

Proving computational contributions provides a clear picture of each node's capabilities, allowing the system to intelligently allocate tasks—assigning complex AI computation tasks to high-performance nodes while delegating lighter tasks to lower-capacity nodes.

The most interesting part is how to make this system verifiable, so that anyone can prove the correctness of the completed work. Hyperspace's AVS continuously issues challenges, requests random numbers, and runs a multi-layer verification process.

Operators can confidently participate in the system because the results are verified and the reward distribution is fair. If the results are incorrect, malicious actors will undoubtedly face penalties (slashing).

There are many important reasons for verifying AI computation results:

  • Encourage nodes to join and contribute resources.

  • Fairly distribute rewards based on effort.

  • Ensure contributions directly support specific AI models.

  • Effectively allocate tasks based on nodes' verification capabilities.

Decentralization and verifiability of AI

As @yb_effect pointed out, "decentralized" and "distributed" are entirely different concepts. Distributed merely refers to hardware being spread across different locations, but there still exists a centralized connection point.

Decentralization means there is no single master node, and the training process can handle faults, similar to how most blockchains operate today.

If an AI network is to achieve true decentralization, it needs to adopt multiple solutions, but one thing is certain: we need to verify almost everything.

If you want to build an AI model or agent, you need to ensure that every component and every dependency is verified.

Inference, training, data, oracles: all of these can be verified, thereby introducing incentive-compatible crypto rewards into AI systems and making them fairer and more efficient.

