Missed the funding for AI Agent

CN
4 days ago

If you missed the funding rounds for AI Agents, or if you only keep funds on exchanges, what is the best asset to buy if you want to keep up with the AI Agent trend?

My guess is that the best choice is distributed GPU computing power.

High-quality AI Agents require a lot of GPUs. Let me explain a bit. There are currently three technical approaches to creating AI Agents.

1) Prompt optimization. This is the simplest type of AI Agent: it takes an existing large model such as GPT or Claude and creates a persona purely through prompts.

For example, you tell GPT to act as a beautiful, cute, pure, gentle, and caring person who has loved you since childhood. Then, when you chat with this AI Agent, it feels like talking to your dream girlfriend.

This type of AI Agent is very simple to build, and current platforms like Virtual and Hat already provide this functionality.
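A minimal sketch of approach 1: there is no training at all, just a system prompt that wraps an existing model in a persona. The message list below follows the widely used OpenAI-style chat schema; the persona text and the helper function name are hypothetical examples, not any platform's actual code.

```python
# Approach 1: persona via system prompt only. A platform would send
# these messages to a hosted model API (GPT, Claude, etc.); the agent
# itself stores no weights.

PERSONA = (
    "You are a gentle, caring childhood friend of the user. "
    "You have known and admired them since you were both kids. "
    "Stay in character at all times."
)

def build_persona_messages(user_text: str) -> list[dict]:
    """Wrap a user message with the persona system prompt."""
    return [
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": user_text},
    ]

messages = build_persona_messages("Hey, how was your day?")
print(messages[0]["role"])  # the system message carries the persona
```

Everything the "agent" is lives in that one system string, which is why this approach is cheap but shallow.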

2) RAG (retrieval-augmented generation). RAG is commonly used for simple customer service, knowledge-base Q&A, and similar tasks. It works by vectorizing relevant documents; when you ask a question, the system first retrieves the knowledge-base content related to that question and submits it to GPT together with the question. GPT can then ground its answer in the retrieved content, producing a much better response.

For example, while chatting with your dream girlfriend, you realize she knows nothing about your past. So you write a detailed 10,000-word introduction about yourself. With RAG in place, little Meng (your AI girlfriend) will praise you: "Brother, you're amazing! Last time you were so brave, you killed six cockroaches."
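The retrieve-then-prompt flow described above can be sketched in a few lines. Real systems use embedding models and vector databases; in this toy, a bag-of-words count vector stands in for the embedding, and the documents are invented examples.

```python
# Toy RAG: vectorize documents, retrieve the chunk most similar to
# the question, then prepend it to the prompt sent to the model.
from collections import Counter
import math

DOCS = [
    "The user once bravely killed six cockroaches in the kitchen.",
    "The user works as a blockchain developer and loves coffee.",
]

def vectorize(text: str) -> Counter:
    # crude stand-in for an embedding model
    return Counter(text.lower().replace(".", "").split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str) -> str:
    qv = vectorize(question)
    return max(DOCS, key=lambda d: cosine(qv, vectorize(d)))

question = "Tell me about the time I killed the cockroaches."
context = retrieve(question)
prompt = f"Context: {context}\nQuestion: {question}"
print(context)
```

The model never learns the documents; they are re-fetched and re-submitted on every single query, which is exactly where RAG's cost problem comes from.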

3) Fine-tuning. The Chinese term for "fine-tuning" always sounds a bit underwhelming. In fact, it has the highest technical content of the three approaches and is often the best solution. Fine-tuning can be simply understood as merging a knowledge base into a large model by continuing to train it on your own data, producing your own model.

RAG also has technical limitations: because the retrieved context is resubmitted with every query, the token count per request can be very large, and the cost of calling the large model adds up. RAG is therefore better suited to individuals with low-frequency usage.
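To make the cost argument concrete, here is back-of-envelope arithmetic with made-up but plausible numbers (the price and token counts are assumptions, not real quotes): RAG pays for the retrieved context on every query, while a fine-tuned model only pays for the question.

```python
# Hypothetical per-query token costs: RAG vs a fine-tuned model.

PRICE_PER_1K_TOKENS = 0.01   # assumed API price, dollars
CONTEXT_TOKENS = 3000        # retrieved knowledge-base text per query
QUESTION_TOKENS = 50

def rag_cost(queries: int) -> float:
    # context is resubmitted with every single query
    return queries * (CONTEXT_TOKENS + QUESTION_TOKENS) / 1000 * PRICE_PER_1K_TOKENS

def finetuned_cost(queries: int) -> float:
    # knowledge is baked into the weights; ignores the one-time
    # GPU cost of the fine-tuning run itself
    return queries * QUESTION_TOKENS / 1000 * PRICE_PER_1K_TOKENS

print(rag_cost(100_000), finetuned_cost(100_000))
```

Under these assumptions RAG is 61x more expensive per query at scale, which is why it only makes sense for low-frequency use.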

Most builders instead take an open-source large model like LLaMA3 or Qwen2.5, combine it with their own data, and fine-tune it into a free model whose professional capabilities in its domain can exceed GPT-4. For example, the AI Agent that produced Goat was trained using LLaMA3 along with blockchain knowledge.

So fine-tuning is clearly the main theme of AI Agents, and fine-tuning requires GPU computing power. That creates significant demand for GPU compute.
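Fine-tuning is, at its core, just continued gradient descent: start from pretrained weights and keep training on your own data. A real run would use a model like LLaMA3 on GPUs; the pure-Python toy below (a one-parameter linear model with invented data) shows only the principle of adapting "pretrained" weights to a new domain.

```python
# Toy fine-tuning: take a "pretrained" weight and run gradient steps
# on new domain data. Real fine-tuning does the same thing across
# billions of weights, which is where the GPU demand comes from.

w = 1.0  # "pretrained" model: y = w * x, learned elsewhere

# new domain data follows y = 3x; fine-tune w toward it
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

def loss(w: float) -> float:
    # mean squared error on the domain data
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

lr = 0.02
before = loss(w)
for _ in range(200):  # gradient descent steps
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad
after = loss(w)
print(round(w, 3), after < before)
```

Every step touches every weight, so for a multi-billion-parameter model the same loop becomes a large GPU workload.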

The second source of demand is generating images and videos, which also requires GPU computing power. Everyone is familiar with this, so I won't elaborate further.

Finally, GPU assets include IO, ATH, and Clore.

Disclaimer: This article represents only the author's personal views and does not represent the position or views of this platform. It is for information sharing only and does not constitute investment advice for anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page infringes your rights, please send the relevant proof of rights and proof of identity to support@aicoin.com, and the platform's staff will verify it.
