Participants in Web3 should focus more on niche scenarios and fully leverage their unique advantages in censorship resistance, transparency, and social verifiability.
Author: David & Goliath
Compiled by: Deep Tide TechFlow
Currently, the AI industry's computing and training processes are dominated by centralized Web2 giants, which command the market with strong capital, cutting-edge hardware, and vast data resources. While this dominance may persist for the most powerful general-purpose machine learning (ML) models, Web3 networks may gradually become a more economical and accessible source of computing resources for mid-range or customized models.
Similarly, when inference demands exceed the capabilities of personal edge devices, some consumers may choose Web3 networks for less censorship and more diverse outputs. Instead of attempting to completely disrupt the entire AI technology stack, Web3 participants should focus on these niche scenarios and fully leverage their unique advantages in censorship resistance, transparency, and social verifiability.
The hardware resources required to train the next generation of foundational models (such as GPT or BERT) are scarce and expensive, and the demand for the highest performance chips will continue to exceed supply. This resource scarcity results in hardware being concentrated in the hands of a few well-funded leading companies, which utilize this hardware to train and commercialize the most performant and complex foundational models.
However, the pace of hardware obsolescence is extremely fast. So, how will outdated mid-range or low-performance hardware be utilized?
This hardware is likely to be used to train simpler or more targeted models. By matching different categories of models with hardware of varying performance, optimal resource allocation can be achieved. Here, Web3 protocols can play a key role by coordinating access to diverse, low-cost computing resources. For example, a consumer could rely on a simple mid-range model trained on a personal dataset, and fall back to high-end models trained and hosted by centralized companies only for more complex tasks, while keeping their identity hidden and their prompt data encrypted.
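The tiered-routing idea described above can be sketched as a simple dispatcher. This is a minimal illustration, not any real protocol: the tier names, complexity scores, and cost figures are all hypothetical assumptions.

```python
from dataclasses import dataclass

@dataclass
class ModelTier:
    name: str
    max_complexity: int   # hypothetical complexity budget this tier can handle
    cost_per_query: float # hypothetical cost in dollars per query

# Two assumed tiers: a mid-range model trained on personal data,
# and a high-end model hosted by a centralized provider.
TIERS = [
    ModelTier("local-midrange", max_complexity=5, cost_per_query=0.001),
    ModelTier("centralized-frontier", max_complexity=10, cost_per_query=0.03),
]

def route(prompt: str, complexity: int) -> ModelTier:
    """Pick the cheapest tier whose complexity budget covers the task."""
    for tier in sorted(TIERS, key=lambda t: t.cost_per_query):
        if complexity <= tier.max_complexity:
            return tier
    return TIERS[-1]  # fall back to the most capable tier
```

A simple task (say, complexity 3) would be routed to the cheap mid-range tier, while a harder one (complexity 8) falls through to the centralized frontier model, matching the allocation pattern the paragraph describes.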
In addition to efficiency issues, concerns about bias and potential censorship in centralized models are also growing. The Web3 environment is known for its transparency and verifiability, and can provide training support for models that Web2 has overlooked or deemed too sensitive. Although these models may not be competitive in performance or innovation, they still hold significant value for certain social groups. Web3 protocols can therefore carve out a unique market in this area by offering more open, trustworthy, and censorship-resistant model training services.
Initially, centralized and decentralized approaches can coexist, each serving different use cases. However, as Web3 continues to improve in developer experience and platform compatibility, and as the network effects of open-source AI gradually become apparent, Web3 may ultimately compete in the core domains of centralized enterprises. Particularly as consumers become increasingly aware of the limitations of centralized models, the advantages of Web3 will become more pronounced.
In addition to training mid-range or specialized models, Web3 participants are well positioned to provide more transparent and flexible inference solutions. Decentralized inference services can bring various benefits, such as zero downtime, modular combinations of models, public model performance evaluations, and more diverse, uncensored outputs. They can also help consumers avoid the "vendor lock-in" that comes with relying on a few centralized providers. As with model training, the competitive advantage of decentralized inference layers does not lie in computing power itself, but in addressing long-standing issues such as the opacity of closed-source fine-tuning parameters, the lack of verifiability, and high costs.
Dan Olshansky has proposed a promising vision: using POKT's AI inference routing network to create more opportunities for AI researchers and engineers to put their work into practice and earn additional income through customized machine learning (ML) or artificial intelligence (AI) models. More importantly, by integrating inference results from different sources (both decentralized and centralized providers), such a network can promote fairer competition in the inference services market.
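The result-integration idea can be sketched as a router that fans a prompt out to several providers and cross-checks their answers. This is a hedged toy sketch, not POKT's actual design: the providers are stubbed as local functions, and majority voting is just one assumed aggregation rule.

```python
from collections import Counter
from typing import Callable

# A provider maps a prompt to an output string. In a real routing
# network these would be remote decentralized or centralized
# inference endpoints; here they are stubbed locally.
Provider = Callable[[str], str]

def aggregate(prompt: str, providers: list[Provider]) -> str:
    """Fan the prompt out to all providers and return the majority
    answer, a simple way to cross-check heterogeneous sources."""
    outputs = [p(prompt) for p in providers]
    answer, _count = Counter(outputs).most_common(1)[0]
    return answer

# Hypothetical mix of sources: two decentralized nodes and one
# centralized API that happens to disagree.
providers = [
    lambda q: "42",        # decentralized node A
    lambda q: "42",        # decentralized node B
    lambda q: "forty-two", # centralized API
]
```

Majority voting is the simplest possible rule; a production router would also weigh cost, latency, and provider reputation when choosing among sources.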
Although optimistic predictions suggest that the entire AI technology stack may eventually migrate on-chain, this goal still faces the significant challenge of centralized data and computing resources, which give existing giants a strong competitive advantage. Nevertheless, decentralized coordination and computing networks demonstrate unique value in providing more personalized, economical, openly competitive, and censorship-resistant AI services. By focusing on these critical niche markets, Web3 can establish its own competitive barriers, ensuring that the most influential technologies of this era evolve in multiple directions and benefit a broader range of stakeholders, rather than being monopolized by a few traditional giants.
Finally, I would like to extend my special thanks to all members of the Placeholder Investment team, as well as Kyle Samani from Multicoin Capital, Anand Iyer from Canonical VC, Keccak Wong from Nectar AI, Alpin Yukseloglu from Osmosis Labs, and Cameron Dennis from NEAR Foundation, who provided reviews and valuable feedback during the writing of this article.