AI's obsession with GPUs has made us overlook a cheaper and smarter solution.

Source: Cointelegraph

Opinion by Naman Kabra, Co-founder and CEO of NodeOps Network

Graphics Processing Units (GPUs) have become the default hardware for many artificial intelligence workloads, especially for training large models. The assumption is ubiquitous, and while it is reasonable in many cases, it has also created a blind spot that is holding us back.

GPUs earned that reputation. Their ability to process massive amounts of data in parallel makes them ideal for training large language models and running high-speed AI inference. It is also why companies like OpenAI, Google, and Meta spend vast sums building GPU clusters.

GPUs may be the preferred choice for running AI, but we should not forget Central Processing Units (CPUs), which remain very capable. Overlooking them could cost us time, money, and opportunity.

CPUs are not obsolete, and more people need to recognize that they can still be used for AI. They sit idle in millions of machines worldwide, capable of running a wide range of AI tasks efficiently and economically, if only we give them the chance.

It is not hard to see how we got here. GPUs are designed for parallelism: they can process massive amounts of data simultaneously, which makes them well suited to tasks like image recognition or training chatbots with billions of parameters. CPUs cannot compete on those jobs.

AI is not just model training, though, and it is not just high-speed matrix math. Today, AI includes running smaller models, interpreting data, managing chains of logic, making decisions, retrieving documents, and answering questions. These are not brute-force math problems. They require flexible thinking. They require logic. They require CPUs.

While GPUs grab the headlines, CPUs quietly form the backbone of many AI workflows, especially once AI systems are operating at scale in the real world.

Recently: "Our GPUs are melting" -- OpenAI sets limits after the Ghibli tsunami.

CPUs excel at what they were designed for: flexible, logic-based operations. They handle one task, or a few at a time, extremely well. Next to the massive parallelism of GPUs that may not sound impressive, but many AI tasks do not need that kind of firepower.

Consider autonomous agents: tools that use AI to perform tasks like searching the web, writing code, or planning projects. An agent may call a large language model running on a GPU, but everything around that call, including the logic, planning, and decision-making, can run perfectly well on a CPU.
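To make that split concrete, here is a minimal Python sketch, assuming a hosted model sitting behind an HTTP endpoint: the only GPU-bound step is the remote model call, while the planning loop and tool dispatch are ordinary control flow on the local CPU. The endpoint URL, response schema, and tool function are hypothetical placeholders, not any particular product's API.

```python
import requests  # any HTTP client would do

# Hypothetical endpoint where a GPU-hosted model is served.
LLM_ENDPOINT = "https://example.com/v1/completions"

def call_llm(prompt: str) -> str:
    """The only GPU-bound step: a network call to the hosted model."""
    resp = requests.post(LLM_ENDPOINT, json={"prompt": prompt}, timeout=60)
    resp.raise_for_status()
    return resp.json()["text"]  # assumed response shape

def search_web(query: str) -> str:
    """Placeholder tool; a real agent would call a search API here."""
    return f"results for {query!r}"

def run_agent(goal: str) -> str:
    # Planning, branching, and tool dispatch below are plain
    # sequential logic: exactly the kind of work a CPU handles well.
    plan = call_llm(f"List the steps needed to: {goal}")
    notes = []
    for step in plan.splitlines():
        step = step.strip()
        if not step:
            continue
        if "search" in step.lower():  # CPU-side decision logic
            notes.append(search_web(step))
        else:
            notes.append(f"handled locally: {step}")
    return call_llm(f"Goal: {goal}\nNotes: {notes}\nWrite a summary.")

if __name__ == "__main__":
    print(run_agent("compare two laptops"))
```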

Even inference (the AI term for using a model after it has been trained) can be done on a CPU, especially when the model is smaller, optimized, or running in situations where ultra-low latency is not required.
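As one concrete illustration (a sketch, not a benchmark), an off-the-shelf runtime such as ONNX Runtime can serve a small or quantized model entirely on the CPU through its CPU execution provider. The model file name and input shape below are placeholders for whatever small model you have exported:

```python
import numpy as np
import onnxruntime as ort  # the CPU-only onnxruntime build suffices

# "model.onnx" stands in for any small or quantized exported model.
session = ort.InferenceSession(
    "model.onnx", providers=["CPUExecutionProvider"]
)

input_name = session.get_inputs()[0].name
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy image batch

outputs = session.run(None, {input_name: batch})
print(outputs[0].shape)
```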

CPUs can handle a wide range of AI tasks very well. Yet we are so fixated on GPU performance that we fail to leverage the technology already at our disposal.

We do not need to keep building expensive new GPU-filled data centers to meet the growing demands of AI. We just need to use the computing we already have effectively.

This is where things get interesting, because we now have a way to actually do this.

DePINs, or Decentralized Physical Infrastructure Networks, offer a viable solution. The name sounds complicated, but the idea is simple: people contribute their idle computing power, such as unused CPUs, to a global network that others can use.

Instead of renting time on a GPU cluster from a centralized cloud provider, you can run AI workloads on a decentralized network of CPUs anywhere in the world. These platforms create a peer-to-peer computing layer where jobs can be distributed, executed, and verified securely.

This model has several obvious benefits. First, it is cheaper: when CPUs can do the job well, you do not need to pay a premium for scarce GPUs. Second, it scales naturally: as more people connect their machines, the network's available computing power grows. Third, it brings computing closer to the edge: tasks can run on machines near where the data lives, reducing latency and improving privacy.

Think of it as Airbnb for computing. Instead of building more hotels (data centers), we make better use of the vacant rooms (idle CPUs) people already have.

If we shift our mindset and use decentralized networks to route each AI workload to the right type of processor, GPUs when needed and CPUs when possible, we can achieve scale, efficiency, and resilience.
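One way to picture that routing, as a toy sketch rather than any real DePIN's scheduler, is a dispatcher that sends massively parallel jobs to a GPU pool and everything else to idle CPUs. The cutoff value and pool names below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    parallel_ops: int  # rough count of parallelizable math operations

# Illustrative cutoff; a real scheduler would also weigh cost,
# queue depth, latency targets, and data locality.
GPU_PARALLELISM_CUTOFF = 10_000_000

def route(job: Job) -> str:
    """Send massively parallel work to GPUs, everything else to idle CPUs."""
    return "gpu-pool" if job.parallel_ops > GPU_PARALLELISM_CUTOFF else "cpu-pool"

jobs = [
    Job("train-large-model", parallel_ops=10**12),
    Job("agent-planning", parallel_ops=10**4),
    Job("document-retrieval", parallel_ops=10**5),
]
for job in jobs:
    print(f"{job.name} -> {route(job)}")
```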

It is time to stop treating CPUs as second-class citizens in AI. Yes, GPUs are crucial; no one denies that. But CPUs are everywhere. Their utilization is low, yet they remain fully capable of powering many of the AI tasks we care about.

Instead of pouring more money into the GPU shortage, we should ask a smarter question: are we even using the computing power we already have?

As decentralized computing platforms connect idle CPUs to the AI economy, we have a tremendous opportunity to rethink how we scale AI infrastructure. The real constraint is not just the availability of GPUs; it is a shift in mindset. We are accustomed to chasing high-end hardware while overlooking the untapped potential sitting idle across the network.

Opinion by Naman Kabra, Co-founder and CEO of NodeOps Network.

This article is for general information purposes only and is not intended and should not be construed as legal or investment advice. The views, thoughts, and opinions expressed in this article are solely those of the author and do not necessarily reflect or represent the views and opinions of Cointelegraph.

