Sam Altman: Next year, OpenAI will enter the era of AI systems.

Enhancing "reasoning ability" remains the core goal of the large-model maker.

Written by: 20VC

Translated by: Mu Mu

Edited by: Wen Dao

After GPT-4, what big moves is OpenAI planning for next year? Where is OpenAI's moat? What is the value of AI Agents? And with many long-time employees departing, will OpenAI turn to younger hires with more conviction and energy?

On November 4, OpenAI CEO Sam Altman (hereinafter referred to as "Altman") answered these questions on "The Twenty Minute VC" podcast, clearly stating that enhancing reasoning ability has always been OpenAI's core strategy.

When podcast host and 20VC founder Harry Stebbings (hereinafter referred to as "Stebbings") asked what opportunities OpenAI would leave for AI entrepreneurs, Altman argued that businesses built merely to patch the shortcomings of today's models will lose their competitiveness as OpenAI's models improve. Entrepreneurs should instead build businesses that benefit as models get better, which he sees as a huge opportunity.

In Altman's view, the way people discuss AI now is somewhat outdated; compared to models, systems are a more worthy direction of focus, and next year will be a key year for OpenAI to move towards AI systems.

Here are the highlights from Stebbings' conversation with Altman:

OpenAI Plans to Create No-Code Tools

Stebbings: I'll start today's interview with a question from the audience: Is OpenAI's future direction to launch more reasoning models like o1, or to train larger and stronger models?

Altman: We will comprehensively optimize our models; enhancing reasoning ability is the core of our current strategy. I believe that strong reasoning ability will unlock a range of functionalities we anticipate, including making substantial contributions in scientific research and writing highly complex code, which will greatly drive societal development and progress. You can expect continuous and rapid iteration and optimization of the GPT series models; this will be the focus and priority of our future work.

Sam Altman in a podcast conversation with 20VC founder Harry Stebbings

Stebbings: Will OpenAI develop no-code tools for non-technical users in the future, allowing them to easily build and scale AI applications?

Altman: There is no doubt that we are steadily moving towards this goal. Our initial plan is to significantly enhance the productivity of programmers, but in the long run, our goal is to create top-notch no-code tools. Although there are already some no-code solutions on the market, they currently do not fully meet the needs of creating a complete startup in a no-code manner.

Stebbings: In the future, in which areas of the technology ecosystem will OpenAI expand? Considering that OpenAI may dominate at the application level, is it a waste of resources for startups to invest heavily in optimizing existing systems? How should founders think about this issue?

Altman: Our goal is to continuously improve our models. If your business is merely to address some minor shortcomings of existing models, then once our models become strong enough and those shortcomings no longer exist, your business model may become uncompetitive.

However, if you can build a business that benefits from the continuous advancement of models, then this will be a huge opportunity. Imagine if someone revealed to you that GPT-4 would become exceptionally powerful, capable of accomplishing tasks that currently seem impossible; you would be able to plan and develop your business from a longer-term perspective.

Stebbings: We previously discussed with venture capitalist Brad Gerstner the potential impact of OpenAI on certain niche markets. From a founder's perspective, which companies might be affected by OpenAI, and which might escape its impact? As investors, how should we assess this issue?

Altman: Artificial intelligence will create trillions of dollars in value; it will give rise to entirely new products and services, making previously impossible or impractical things feasible. In certain areas, we expect models to become powerful enough to make achieving goals effortless; in other areas, this new technology will be further enhanced by building excellent products and services.

In the early days, about 95% of startups seemed to bet that models would not improve, which surprised me; now I am no longer surprised. When GPT-3.5 was just released, we already foresaw the potential of GPT-4, and we knew it would be very powerful.

So, if the tools you build are merely to compensate for the shortcomings of models, then as models continue to improve, those shortcomings will become increasingly irrelevant.

When models performed poorly in the past, people were more inclined to develop products that compensated for model deficiencies rather than build revolutionary products like "AI teachers" or "AI medical advisors." I felt that at that time, 95% of people were betting that models would not improve, while only 5% believed that models would get better.

Now the situation has reversed; people understand the pace of improvement and our direction of development. This issue is no longer as prominent, but we were once very concerned because we foresaw that those companies (working to compensate for model deficiencies) might face difficulties.

Stebbings: You once said, "Artificial intelligence will create trillions of dollars in value," and Masayoshi Son (founder and CEO of SoftBank Group) has predicted that AI will create $9 trillion in value each year, enough to offset the $9 trillion in annual capital expenditure he considers necessary. What are your thoughts on this?

Altman: I cannot provide an exact figure; clearly, significant capital expenditure will also create enormous value, as has been the case with every major technological revolution, and artificial intelligence is undoubtedly one of them.

Next year will be a key year for us as we enter the era of the next generation of AI systems. Regarding the development of no-code software agents you mentioned, I am not sure how long this will take; it is not currently achievable, but if we can reach that goal where everyone can easily obtain the complete set of enterprise-level software they need, it will release tremendous economic value for the world. If you can maintain the same value output while making it more convenient and cost-effective, it will have a huge impact.

I believe we will see more examples like this, including in the fields of healthcare and education, which represent trillions of dollars in markets. If AI can drive new solutions in these areas, I think the specific numbers are not important; what matters is that it will indeed create incredible value.

Excellent AI Agents Will Have Capabilities Beyond Humans

Stebbings: What role do you think open source will play in the future development of artificial intelligence? How does the discussion on "whether certain models should be open-sourced" take place within OpenAI?

Altman: Open-source models play a crucial role in the artificial intelligence ecosystem. There are already some outstanding open-source models available. I believe that providing high-quality services and APIs is also essential. In my view, it makes sense to offer these elements as a product bundle so that people can choose the solution that best fits their needs.

Stebbings: Besides open source, we can also provide services to customers through Agents. How do you define "Agent"? What do you think it is and what it is not?

Altman: I think an Agent is a program that can perform long-duration tasks with minimal human supervision during the execution of those tasks.
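
As a rough illustration of that definition, here is a minimal sketch of such a loop: an agent that works through a long task step by step and escalates to a human only rarely. Everything in it, the names, probabilities, and structure, is a made-up placeholder, not anything from OpenAI.

```python
import random

# Toy agent loop illustrating "long-duration task, minimal supervision."
def plan_next_action(state: dict) -> str:
    if state["steps_done"] >= state["steps_needed"]:
        return "finish"
    # Escalate to a human only rarely (the "minimal supervision" part).
    return "ask_human" if random.random() < 0.02 else "work"

def run_agent(task: str, steps_needed: int = 50) -> dict:
    state = {"task": task, "steps_needed": steps_needed,
             "steps_done": 0, "escalations": 0}
    while True:
        action = plan_next_action(state)
        if action == "finish":
            return state
        if action == "ask_human":
            state["escalations"] += 1  # pause and ask for guidance
        state["steps_done"] += 1       # otherwise: a tool call, code run, etc.

print(run_agent("draft the quarterly report"))
```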

Stebbings: Do you think there are misconceptions about Agents?

Altman: Rather than misconceptions, I would say we have not fully understood the role Agents will play in the future world.

People often mention examples like having AI Agents help with restaurant reservations, say through OpenTable or by calling the restaurant directly. That can indeed save some time, but what is more exciting is that Agents can do things humans cannot, such as contacting 300 restaurants simultaneously to find the best dishes or the ones that can accommodate a special request. That is a nearly impossible task for a human, but AI Agents can work on the problem in parallel.

While this example is simple, it demonstrates the capabilities of Agents that surpass human abilities. More interestingly, Agents can not only help you make restaurant reservations but can also act like a very smart senior colleague, collaborating with you on a project; or they can independently complete a task that takes two days or even two weeks, only contacting you when they encounter issues and ultimately presenting an excellent result.
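
To make the 300-restaurants point concrete, here is a minimal sketch of the fan-out pattern; `query_restaurant` is a hypothetical stand-in for a real agent call, not an actual API.

```python
import asyncio

# Hypothetical stand-in for one agent's interaction with one restaurant.
async def query_restaurant(name: str) -> dict:
    await asyncio.sleep(0.1)  # placeholder for a real call or conversation
    return {"restaurant": name, "available": True}

async def find_tables(restaurants: list[str]) -> list[dict]:
    # Fan out one agent task per restaurant and run them all concurrently,
    # something no human caller could do.
    results = await asyncio.gather(*(query_restaurant(r) for r in restaurants))
    return [r for r in results if r["available"]]

options = asyncio.run(find_tables([f"restaurant_{i}" for i in range(300)]))
print(f"{len(options)} restaurants answered yes")
```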

Stebbings: Will this Agent model impact the pricing of SaaS (Software as a Service)? Traditionally, SaaS charges based on user seats, but now Agents are effectively replacing human labor. How do you view the changes in future pricing models, especially as AI Agents become a core part of enterprise employees?

Altman: I can only speculate because we really cannot be certain. I can envision a scenario where future pricing models will be determined based on the computing resources you use, such as whether you need 1 GPU, 10 GPUs, or 100 GPUs to solve a problem. In this case, pricing will no longer be based on the number of seats or even the number of Agents, but rather on the actual amount of computing consumed.
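
A minimal sketch of the compute-based pricing Altman speculates about, with invented rates: the bill scales with GPUs and hours consumed, not with seats or the number of Agents.

```python
GPU_HOUR_RATE = 2.50  # assumed price per GPU-hour, for illustration only

def compute_bill(gpus: int, hours: float, rate: float = GPU_HOUR_RATE) -> float:
    # Usage-based pricing: cost tracks compute consumed, not seat count.
    return gpus * hours * rate

# The same task priced at the three tiers Altman mentions:
for gpus in (1, 10, 100):
    print(f"{gpus:>3} GPUs x 8h = ${compute_bill(gpus, 8):,.2f}")
```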

Stebbings: Do we need to build dedicated models for Agents?

Altman: Indeed, a lot of infrastructure is needed to support the operation of Agents, but I believe o1 has already pointed the way: it is a general model capable of performing complex Agent tasks.

Models Are Depreciating Assets, but Worth More Than They Cost to Train

Stebbings: Many people believe that as models become increasingly commoditized, they are depreciating assets. How do you view this perspective? Meanwhile, the capital intensity of training models keeps rising; does this mean that only a few companies can afford such costs?

Altman: Indeed, models can be seen as depreciating assets, but it is completely wrong to think their value is lower than the cost of training. In fact, during the process of training models, we can achieve a positive compounding effect, meaning the knowledge and experience we gain from training will help us train the next generation of models more efficiently.

I believe the actual revenue we derive from models has proven the rationale behind these investments. Of course, not all companies can achieve such results. Currently, many companies may be training very similar models, but if you fall slightly behind or do not have a product that can continuously attract users and provide value, it may become more difficult to achieve a return on investment.

We are fortunate to have ChatGPT, which is used by hundreds of millions of users, so even with high costs, we can spread these costs across a large user base.
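
As a back-of-the-envelope illustration of that amortization (the numbers are invented, not OpenAI's), a large fixed training cost shrinks to a small per-user figure once spread over hundreds of millions of users:

```python
training_cost = 1_000_000_000  # assumed $1B fixed cost, illustration only

for users in (1_000_000, 100_000_000, 500_000_000):
    print(f"{users:>11,} users -> ${training_cost / users:>10,.2f} per user")
```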

Stebbings: How will OpenAI's models maintain differentiation in the future? In which areas do you hope to expand differentiation the most?

Altman: Reasoning ability is the area we currently value the most, and I believe this will be key to unlocking the next phase of large-scale value creation. Additionally, we will also focus on the development of multimodal models and introduce new features that we believe are crucial for users.

Stebbings: How will visual capabilities expand under the new o1 reasoning paradigm?

Altman: Without giving too much away, I expect image models to develop rapidly.

Stebbings: Models from Anthropic are sometimes considered to perform better on programming tasks. What are your thoughts on this? Do you think this assessment is fair? How should developers choose between OpenAI and other providers?

Altman: Anthropic indeed has a model that performs well in programming, and their work is impressive. I believe developers will typically use multiple models simultaneously, and I am not sure how this will change as the field develops. But I believe AI will be ubiquitous in the future.

The way we currently discuss AI may be somewhat outdated; I predict we will shift from discussing "models" to discussing "systems," but this will take time to realize.

Stebbings: Regarding the scaling of models, how long do you think scaling laws will continue to hold? In the past, people believed they would not last, but they have proven more durable than expected.

Altman: Without delving into details, the core question is: will the trajectory of model capability improvement continue as it is now? I believe it will, and it will last for quite a long time.

Stebbings: Did you ever have doubts about this?

Altman: We have indeed encountered behavior patterns we could not understand, experienced some failed training runs, and tried various new paradigms. When we approach the limits of one paradigm, we must find the next breakthrough.

Stebbings: What has been the most challenging aspect to deal with in this process?

Altman: During our development of GPT-4, we encountered some extremely tricky problems that left us feeling helpless at times, unsure of how to solve them. Ultimately, we succeeded in overcoming these challenges. But there was indeed a period when we felt lost about how to advance the development of the models.

Additionally, the shift to o1 and the concept of reasoning models had long been goals we aspired to, but the research path to achieving them was filled with challenges and twists.

Stebbings: In such a long and winding process, how do you maintain team morale? How do you keep morale up when the training process may fail?

Altman: The members of our team are all passionate about building Artificial General Intelligence (AGI), which is a highly motivating goal. We all understand that this is not an easy path, and success will not come easily. There is a saying: "I never pray for God to be on my side, but rather that I can be on God's side."

Diving into deep learning feels like engaging in a righteous cause; despite the inevitable setbacks along the way, we seem to always make progress in the end. This steadfast belief is a tremendous help for us.

Stebbings: How concerned are you about the semiconductor supply chain and international tensions?

Altman: I cannot quantify the level of this concern, but there is no doubt that I do feel worried. While it may not be my top concern, it definitely ranks in the top 10% of all the issues I care about.

Stebbings: May I ask what your biggest concern is?

Altman: Overall, my biggest concern is the complexity of trying to accomplish everything across the entire field. While I believe everything will ultimately be resolved, it is indeed an extremely complex system.

This complexity exists at all levels, including within OpenAI and within each team. Take semiconductors as an example; we need to balance power supply, make the right network decisions, ensure we obtain enough chips, while also considering potential risks and whether research progress can match these challenges, so we are not completely caught off guard or waste resources.

The entire supply chain may seem like a straight pipeline, but in reality, the complexity of the ecosystem at each level exceeds what I have seen in any other industry. To some extent, this is what I worry about the most.

Stebbings: You mentioned unprecedented complexity. Many people compare the current AI wave to the internet bubble period, especially when discussing excitement and enthusiasm. I think the difference lies in the scale of funding. Larry Ellison (co-founder of Oracle) once stated that the entry cost for the foundational model race is $100 billion. Do you agree with this view?

Altman: No, I don't think the costs will be that high. But there is an interesting phenomenon: people like to draw analogies between past technological revolutions and new revolutions to make them seem more familiar. I think, overall, this is not a good habit, but I understand why people do it. I also feel that the AI analogies people choose are particularly inappropriate; the internet is clearly very different from AI.

You raised the question of cost, of whether it truly requires $10 billion or $100 billion to be competitive. One hallmark of the internet revolution was that it was easy to get started. Another internet-like characteristic is that, for many companies, AI is just an extension of the internet: others will build the models, and you can use them to create excellent products. That view treats AI as a new way of building technology. But if you want to build AI itself, the situation is entirely different.

Another common analogy is electricity, but I think this does not apply in many ways.

While I believe people should not rely too heavily on analogies, my favorite analogy is the transistor, which is a new discovery in physics with incredible scalability that quickly permeated various fields, benefiting the entire tech industry. The products and services we use contain a large number of transistors, but you wouldn't think of the companies creating those products and services as "transistor companies."

This (transistor) is a very complex and expensive industrial process, around which a vast supply chain has formed. This simple physical discovery has led to long-term economic growth, even though most of the time people are not aware of its existence; they just feel, "This thing helps me process information."

Maintaining High Standards for Talent, Rather Than Favoring a Certain Age Group

Stebbings: How do you think human talent is wasted?

Altman: There are many very talented people in the world who are unable to reach their full potential because they work in unsuitable companies, live in countries that do not support excellent companies, or for various other reasons.

One of the things I am most excited about with AI is that it may help us better realize everyone's potential, and we are currently far from doing enough in this regard. I believe there are many potential excellent AI researchers in the world; they just have different life trajectories.

Stebbings: Over the past year, you have experienced incredibly rapid growth. Looking back over the past decade, what has been the biggest change in how you lead?

Altman: For me, the most unusual thing in recent years has been the speed of change. A normal company grows from zero to $100 million in revenue, then from $100 million to $1 billion, and finally from $1 billion to $10 billion; this usually takes a long time, but we have to complete this process in just two years. We have transformed from a purely research lab into a company that truly serves a large number of customers, and this rapid transition has left me with little time to learn.

Stebbings: What are some areas you wish you had more time to learn about?

Altman: How to guide a company to focus on achieving 10x growth rather than just 10% growth. Growing from billions in revenue to tens of billions requires profound transformation, not just repeating last week's work.

But the challenge of rapid growth is that we do not have enough time to solidify the foundation. I underestimated the effort required to keep up and continuously push forward in such a fast-growing environment.

Internal communication, information sharing, structured management, and how to balance short-term needs with long-term development are all crucial. For example, to ensure the company’s execution capability in the next year or two, we need to prepare computing resources, office space, etc., in advance. Effective planning in such a fast-growing environment is very challenging.

Stebbings: Keith Rabois (venture capitalist) once said that he learned from Peter Thiel (co-founder of PayPal) that hiring people under 30 is the secret to building great companies. What do you think of this advice, that building a company by hiring very energetic and ambitious young people is the only way?

Altman: I was about 30 when I founded OpenAI, which is not too young, but seems appropriate (laughs). So, this is indeed a path worth trying.

Stebbings: However, while young people are full of energy and ambition, they may lack experience; or should we choose those who are experienced and have proven themselves?

Altman: The obvious answer is that hiring both types of talent can lead to success, which is exactly what we have done at OpenAI. Just before today's interview, I was discussing a young person who recently joined our team, probably in their early twenties, whose performance has been outstanding. It makes me wonder whether we can find more talent like this, because these young people bring new perspectives and energy.

However, on the other hand, if you are designing one of the most complex and costly computing systems in human history, I would not easily entrust that responsibility to a newcomer. Therefore, we need a combination of both types of talent. I think the key is to maintain high standards for talent, rather than simply favoring a certain age group.

I am particularly grateful to Y Combinator (the startup incubator) because it made me realize that a lack of experience does not equate to a lack of value. There are many high-potential individuals early in their careers who can create tremendous value, and our society should invest in these talents, which is a very positive thing.

Stebbings: I recently heard a saying—"The heaviest burden in life is not iron or gold, but the decisions not made." For you, which unmade decision has caused you the most stress?

Altman: The answer to this question changes every day; no single unmade decision is particularly significant. Of course, we do face some major decisions, such as which product direction to choose or how to design the next generation of computers—these are important and risky choices.

In such situations, I might delay the decision, but most of the time, the challenge lies in facing some 51% versus 49% dilemmas daily. These decisions are presented to me because they are hard to resolve, and I may not be more confident than others in the team about making a better choice, but I have to make a decision.

So, the core of the issue lies in the number of decisions rather than any specific one.

Stebbings: When faced with a 51% versus 49% decision, do you have specific people you consult?

Altman: No, I think relying on a single person for everything is not the right approach. For me, a better way is to find 15 or 20 people who have good intuition and background knowledge in specific areas and consult the best experts when needed, rather than relying on a single advisor.

Quick Q&A

Stebbings: Suppose you are a 23 or 24-year-old today, considering the existing infrastructure, what would you choose to do?

Altman: I would choose a vertically integrated area supported by AI, such as AI education. I would develop the best AI education products to enable people to learn knowledge in any field. Similar examples could include AI lawyers, AI CAD engineers, etc.

Stebbings: You mentioned writing a book; what would you name it?

Altman: I haven't thought of a name yet. I haven't given this book much thought; I just feel that its existence would inspire many people's potential. It might relate to the theme of "human potential."

Stebbings: In the field of AI, what direction do you think deserves more attention that people are currently overlooking?

Altman: What I hope to see is an AI that can understand your entire life. It doesn't need infinite context, but I hope there can be some way for you to have an AI agent that understands all your data and can assist you.
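
One common way to approximate "understands all your data" without an infinite context window is retrieval: index the data, then pull only the most relevant items into each query. The sketch below uses a toy bag-of-words similarity so it stays self-contained; a real system would use learned embeddings, and none of this describes OpenAI's actual plans.

```python
from collections import Counter
import math
import re

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words counts stand in for a learned model.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

life_log = [
    "flight to Tokyo booked for March 3",
    "dentist appointment moved to Friday",
    "notes from Monday's planning meeting",
]
index = [(doc, embed(doc)) for doc in life_log]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Surface only the top-k relevant items instead of the whole history.
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

print(retrieve("when is my trip to Tokyo?"))
```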

Stebbings: Has anything surprised you in the past month?

Altman: There is a research result I cannot disclose, but it is shocking.

Stebbings: Who is your most respected competitor? Why?

Altman: I actually respect everyone in this field; it is filled with outstanding talent and excellent work. I am not intentionally avoiding the question; I just see talented people doing remarkable work everywhere.

Stebbings: Is there a specific one?

Altman: No, there isn't a particular one.

Stebbings: Which OpenAI API is your favorite?

Altman: The new Realtime API is fantastic; we now have a large API business with many great features.

Stebbings: Who do you respect the most in the AI field today?

Altman: I want to specifically mention the Cursor team; they have brought a magical experience with AI and created a lot of value for people. Many people have not been able to piece together all the elements, but they have done it. I intentionally did not mention anyone from OpenAI; otherwise, this list would be very long.

Stebbings: How do you view the trade-off between latency and accuracy?

Altman: There needs to be a knob that can adjust between the two. Just like now, you want me to answer questions quickly, and I try not to take a few minutes to think; at this point, latency becomes important. If you want me to make a significant discovery, you might be willing to wait for years. The answer is, this should be controllable by the user.
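
As a sketch of what such a user-controlled knob could look like, the tiers below trade thinking time for answer quality; the parameter names and budgets are invented for illustration and are not OpenAI's interface.

```python
from dataclasses import dataclass

@dataclass
class EffortTier:
    max_thinking_seconds: float  # latency budget
    samples: int                 # candidate answers to generate and rank

# Invented tiers: the caller picks a point on the latency/quality curve.
TIERS = {
    "fast":     EffortTier(max_thinking_seconds=1.0,   samples=1),
    "balanced": EffortTier(max_thinking_seconds=10.0,  samples=4),
    "deep":     EffortTier(max_thinking_seconds=600.0, samples=32),
}

def answer(question: str, effort: str = "fast") -> str:
    tier = TIERS[effort]
    # A real system would spend up to the budget reasoning and keep the best
    # of `samples` candidates; here we only echo the selected budget.
    return (f"{question!r}: up to {tier.max_thinking_seconds}s of thinking, "
            f"{tier.samples} candidate(s)")

print(answer("quick fact"))
print(answer("novel theorem", effort="deep"))
```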

Stebbings: When you think about insecurity in leadership, what do you think you need to improve the most? As a leader and CEO, what do you most want to enhance?

Altman: Recently, I feel more uncertain than before about what the details of our product strategy should be. Overall, I feel that product is my weak point, and the company now needs me to provide a clearer product vision. We have a great product lead and team, but this is an area I wish I were better at, and I have felt this particularly strongly lately.

Stebbings: You hired Kevin Weil (OpenAI's Chief Product Officer); I have known him for many years, and he is excellent. What qualities make Kevin a world-class product leader?

Altman: "Discipline" is the first word that comes to mind.

Stebbings: Specifically, what does that refer to?

Altman: He is very focused on priorities, knows what to say no to, and can think from the user's perspective about why to do or not do something. He is really rigorous and does not have fanciful ideas.

Stebbings: Looking ahead five and ten years, if you had a magic wand to outline OpenAI's vision for the next five and ten years, what would it look like?

Altman: I can easily outline the next two years, but if we guess right and start creating some super-strong systems, such as in scientific advancement, this will lead to incredible technological progress.

I believe that in five years, we will see an astonishing pace of technological advancement, even exceeding everyone's expectations. Society may feel that "the moment of AGI has come and gone"; we will discover many new things, not only in AI research but also in other scientific fields.

On the other hand, I think the changes (brought by technological advancement) to society are actually relatively limited.

For example, if you had asked people five years ago: "Will computers pass the Turing test?" they would probably say: "No." If you told them: "Yes," they would think this would bring about a huge transformation in society. Now, looking at it, we have indeed roughly passed the Turing test, but the changes in society have not been that dramatic.

This is my expectation for the future: that technological advancements will continually break all expectations, while societal changes will be relatively slow. I think this is a good and healthy state. In the long run, technological progress will certainly bring about significant changes to society, but it will not be reflected so rapidly within five to ten years.
