The Sino-U.S. AI Race Is Approaching the 2027 Critical Point: Why Monad?


Monad is the best choice for building blockchain AI projects.

Author: Harvey C

Since last year, global enthusiasm for artificial intelligence has been on the rise. Overseas tech giants and Chinese research institutions alike have poured investment into AI models and raced to release them.

Today, we will discuss the huge potential and opportunities of AI in the future, focusing on the current state of the AI race between China and the U.S., expectations for possible milestone moments in 2027, and why AI projects should choose to build on Monad.

1. The AI Race Between China and the U.S.: Rapid Progress Despite Limited Computing Power

In recent years, the U.S. has drawn significant attention for its export restrictions on AI chips aimed at the Chinese AI industry. Judging by actual results, however, this "hardware bottleneck" has not delayed mainland AI research as much as imagined. For instance, the gap between large-model iterations in China and the U.S. has narrowed to just a few months, or even less.

  • The strength of the followers: OpenAI's o1-preview launched just four months ago, and the official o1 only a month ago. Almost simultaneously, mainland labs produced reasoning models with comparable metrics. DeepSeek, the model company backed by the quantitative trading giant High-Flyer, released the open-source reasoning model DeepSeek-R1, which is on par with o1 on several benchmarks and even more "down-to-earth" in some customized scenarios. Earlier, the release of DeepSeek's V3 put real pressure on Llama 4, a taste of being overtaken from the East. In addition, Kimi released k1.5, a new reinforcement-learning model and the first o1-style multimodal model after OpenAI's, capable of joint reasoning over both text and images.

  • The explosion of voice multimodality: Doubao's advanced voice mode, comparable to GPT-4o's and Gemini 2.0's, has also quickly appeared in the domestic market. These capabilities were once assumed to require massive computing power, but mainland vendors have evidently achieved rapid iteration despite limited compute, through workarounds and algorithmic optimization.

From this series of signs, it can be seen that even with existing gaps in computing power, AI researchers on the mainland are quickly following in the footsteps of their overseas counterparts. As long as there are predecessors paving the way through trial and error, later entrants can often save on expensive "costs of hitting walls."

2. The Experience of Overseas Leaders Provides "Copying Homework" Opportunities for Followers

In the years of rapid development of deep learning, the industry's understanding of AI paradigms has also been continuously evolving. Large language models (LLMs) have become the current hot topic, but at the same time, another route—Reinforcement Learning (RL)—is regaining more attention.

Evolution of AI Paradigms

  • AI paradigms change with the times: The development of artificial intelligence is divided into several main branches. Symbolism (logic-based AI) focuses on rule reasoning and formal logic, excelling at handling deterministic tasks; connectionism (neural networks) mimics the brain's computational methods, recognizing patterns from data through hierarchical structures. Bayesian AI (probabilistic AI) emphasizes modeling uncertainty through probability, reinforcement learning (RL) optimizes behavior through trial and error in dynamic environments, evolutionary AI uses principles of natural selection to evolve solutions, and hybrid AI combines the advantages of multiple paradigms to create more powerful and flexible systems. These paradigms have evolved over time, each excelling in its own way.

  • Rapid replication and iteration: Once overseas leaders validate an approach, the mainland can follow up quickly at much lower cost, as with multimodal models and even o1-style reasoning models. This is reminiscent of the internet era, when R&D happened in Europe and the U.S. and commercialization followed in China: since the paths have already been paved overseas, mainland teams can replicate and improve on them, often significantly shortening the time required.

Mainland China's DeepSeek puts immense pressure on American competitors by training competitive models at far lower cost.

On one hand, leading overseas laboratories have laid the groundwork for frontier exploration in the industry; on the other hand, mainland teams are not merely passively following but are continuously "integrating and applying" knowledge, and under relatively limited computing resources, they are finding alternative paths to develop more flexible solutions in reinforcement learning and reasoning.

3. Reinforcement Learning May Be a New Breakthrough for AI in the Near Future

DeepSeek's recent model replaces most supervised fine-tuning (SFT) with reinforcement learning, reducing reliance on labeled data in specific domains. This opens up the possibility of generalization in vertical fields: as long as there is a clear reward function, the model can continuously improve its reasoning through self-iteration. In the open-source release of R1, the R1-Zero variant explores on its own in a reinforcement learning environment, giving the model a degree of self-optimization capability.

This means that even without large amounts of labeled data, given sufficient computing power the model can keep evolving along the RL path. Although the U.S. restricts China's access to hardware and technology, this "alternative path" reduces the demand for traditional large-scale training resources, letting more mainland researchers see a feasible route to rapid catch-up, one worth emulating for other organizations hoping to build their own large-model capabilities.
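The core loop described above, improvement driven only by a reward signal rather than labeled data, can be sketched with a minimal tabular Q-learning example. The corridor environment and all hyperparameters here are illustrative toys chosen for this sketch, not anything from DeepSeek's actual pipeline; the point is only that a clear reward function is enough for the agent to improve through trial and error.

```python
import random

# Toy corridor: states 0..4, actions 0 (step left) / 1 (step right).
# Reward 1.0 only on reaching the goal state 4 -- the "clear reward
# function" that the RL route described above requires.
N_STATES, GOAL = 5, 4
MOVES = (-1, 1)

def step(state, action):
    nxt = min(max(state + MOVES[action], 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy exploration, then the standard Q-learning
            # update: no labels anywhere, only the reward signal.
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: q[s][x])
            nxt, r, done = step(s, a)
            q[s][a] += alpha * (r + gamma * max(q[nxt]) - q[s][a])
            s = nxt
    return q

q = train()
# Greedy policy learned purely from trial and error: move right everywhere.
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(N_STATES)]
```

Scaled up enormously, with a language model as the policy and verifiable answers as the reward, this is the same shape of loop that makes RL-driven reasoning training attractive when labeled data is scarce.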

4. 2027 May Be the AI Moment, Many Jobs Will Be Replaced or Redefined

The rapid advancement of AI makes us ponder how fast the pace is and where the limits lie. Recently, Anthropic's CEO Dario Amodei stated: "By 2027, we will see models surpassing human capabilities in most fields," with the underlying reasons being:

  • Continuous iteration of reinforcement learning: Unlike the past approach of strictly separating training, testing, and inference, the next wave of AI capability breakthroughs may adopt a hybrid approach, such as combining reinforcement learning with "online" reflection and self-updating, allowing models' cognitive abilities to leap forward rapidly.

  • Strong support from industrial capital: Chip manufacturing leader TSMC recently guided during its earnings release that the compound annual growth rate of its AI business could reach around 45% from 2024 to 2029; at that rate, 2029 AI revenue would be roughly 6.4 times the 2024 level (1.45^5 ≈ 6.4). The underlying demand mainly comes from the computing power investments of giants like OpenAI. This also indicates that AI will penetrate almost every imaginable application field.

AI is becoming the fastest-growing area of TSMC's future business.

2027 is likely to become a key node for the rapid rise of AI capabilities. Around this time, many jobs that previously required human evaluation and creation will gradually be replaced or redefined by models, leading to profound changes in social forms and economic models.

For example, I really like Bloom's Taxonomy, proposed by Benjamin Bloom and revised over the years, which divides the human cognitive domain into six levels: remember, understand, apply, analyze, evaluate, and create. The rapid development of AI may mean that by 2027, large models can accomplish everything except evaluation and creation. There is not much time left for humanity; what should humans do? That is worth a separate discussion.

Blade Runner (1982)

5. Why AI Projects Should Choose to Develop in the Monad Ecosystem

Back to the main topic, many people still associate the combination of blockchain and AI with "concept hype." However, I believe that the integration of AI projects with Monad's technical characteristics can provide unique application value, making Monad the best choice for building blockchain AI projects for the following reasons:

  • Rich data from EVM itself: Obtaining diverse and high-quality data is crucial for training more accurate and powerful models. The EVM-based ecosystem has already accumulated a large amount of contract data, transaction data, and user behavior, all of which can provide contextual training for AI. Compared to some nascent blockchain environments, the EVM ecosystem is already mature and can provide a "data treasure trove" for AI training and reasoning.

  • MonadDB adapts to real-time data: Capturing real-time network information is vital for AI timeliness, especially in DeFi. MonadDB's high-speed state access, combined with the chain's low gas fees, gives models strong support for retrieving the latest on-chain or off-chain information at any time.

  • High-speed and low-cost environment: Monad is designed to deliver a throughput of 10,000 transactions per second with 1-second block times, giving AI agents an efficient, low-cost environment, in effect "real tools", for economic behavior such as on-chain transactions, collateral, and payments. Low transaction fees make high-frequency, small transactions possible, further enriching the use cases for AI agent interactions.

  • Parallel execution: Existing research shows that hybrid AI collaboration can handle more complex and challenging tasks. Monad's parallel execution environment allows Agentic Swarms to run multiple models concurrently on-chain.
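As a concrete illustration of the "data treasure trove" point above, here is a minimal feature extractor over the standard EVM JSON-RPC transaction fields (`from`, `to`, `value`, `gas`, `input`), which are the same on any EVM chain, Monad included. The sample transaction below is fabricated for illustration; in practice these objects would come from a node via calls such as `eth_getBlockByNumber` with full transactions.

```python
WEI_PER_ETH = 10**18

def tx_features(tx: dict) -> dict:
    """Flatten a standard EVM JSON-RPC transaction object (hex-encoded
    quantities) into features usable as AI training context."""
    data = tx.get("input", "0x")
    return {
        "from": tx["from"],
        "to": tx["to"],                     # None for contract creation
        "value_native": int(tx["value"], 16) / WEI_PER_ETH,
        "gas_limit": int(tx["gas"], 16),
        "is_contract_call": len(data) > 2,  # non-empty calldata
        "selector": data[:10] if len(data) >= 10 else None,  # 4-byte function id
    }

# Fabricated sample in the standard JSON-RPC hex encoding.
sample = {
    "from": "0xabc0000000000000000000000000000000000001",
    "to": "0xdef0000000000000000000000000000000000002",
    "value": hex(25 * 10**16),  # 0.25 in native-token units
    "gas": hex(21000),
    "input": "0x",
}
feats = tx_features(sample)
```

Because the encoding is identical across the EVM ecosystem, a pipeline like this can ingest contract, transaction, and user-behavior data from any mature EVM chain and reuse it unchanged on Monad.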

In summary, the goal is to enable existing or future AI entities to build on Monad, thereby achieving a smooth user experience in the EVM environment and leveraging various existing DeFi protocols and infrastructure to lay the foundation for creating the next generation of AI experiences.
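To make the "high-frequency, small transactions" point concrete: on any EVM chain the fee for a plain value transfer is gas used times gas price, with a fixed 21,000 gas for a simple transfer. The gas prices below are hypothetical placeholders for comparison, not measured Monad figures.

```python
GWEI = 10**9
WEI_PER_ETH = 10**18
TRANSFER_GAS = 21_000  # fixed gas cost of a plain EVM value transfer

def transfer_fee(gas_price_gwei: float) -> float:
    """Fee, in the chain's native token, for one simple transfer:
    gas_used * gas_price, converted from gwei to whole tokens."""
    return TRANSFER_GAS * gas_price_gwei * GWEI / WEI_PER_ETH

# Hypothetical comparison: a congested high-fee chain vs. a low-fee one.
high_fee = transfer_fee(50)    # 50 gwei   -> ~0.00105 per transfer
low_fee = transfer_fee(0.05)   # 0.05 gwei -> ~0.00000105 per transfer
```

Under these assumed prices the fee drops by three orders of magnitude, which is what turns an AI agent making thousands of tiny payments per day from uneconomical into routine.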

6. An Invitation to Chinese-speaking AI Developers

As my colleagues Jing and Evan have frequently mentioned in their X posts regarding various AI articles and case studies, Monad has always welcomed and supported AI projects to take root in the Monad ecosystem. I would like to specifically explain the reasons from the perspective of Chinese-speaking AI developers:

  • In the context of the AI competition between China and the U.S., Chinese-speaking developers may face limitations in cross-border resources or collaborations.

  • However, the blockchain world emphasizes globalization and decentralization. On Monad, you can target global users without worrying about being "labeled," while leveraging the advantages of decentralization to circumvent geopolitical and technological barriers.

Monad sincerely invites all AI practitioners and developers to join the Monad ecosystem and create innovative products that belong to users themselves. Ambitious entrepreneurs are welcome to build on Monad, which is committed to helping you succeed through various forms of developer support, including the ongoing evm/accathon, Mach Accelerator, Jumpstart Program, The Studio, The Foundry, Monad Madness, and more. Whether you want to create on-chain AI financial trading models or build decentralized robots with adaptive learning, you can find faster, lower-cost, and more open technical support on Monad.

Disclaimer: This article represents only the author's personal views and does not reflect the position or views of this platform. It is shared for informational purposes only and does not constitute investment advice of any kind. Any dispute between users and the author is unrelated to this platform. If any article or image on this page infringes your rights, please email proof of rights and identity to support@aicoin.com, and platform staff will investigate.
