Sam Altman: The best time to start a business is now!
Source: Tencent Technology
In today's tech world, Sam Altman is undoubtedly one of the most influential figures shaping the future. As the leader of OpenAI, Altman and his team have made groundbreaking progress in machine learning and generative artificial intelligence, most recently launching large language models (LLMs) with PhD-level reasoning capabilities. And this is just the beginning of their journey. In his latest essay, Altman boldly predicts that superintelligent AI (ASI) will arrive within a few thousand days. So how did they get to this point?
In the "How to Build the Future" interview series, Garry Tan, president and CEO of the well-known American startup accelerator Y Combinator (YC), sat down with Altman for an exclusive interview. They delved into the origins of OpenAI and the company's future direction, and Altman offered valuable advice for entrepreneurs navigating platform shifts.
Here is the full interview with Sam Altman:
01 Now is the Best Time to Start a Business!
Garry Tan: Let's talk about your latest article on the topic of the intelligent era. Do you think now is the best time to start a tech company?
Sam Altman: At the very least, this is the most ideal time so far, and of course, I hope for even better times in the future. In my view, every major technological innovation greatly expands our capabilities, allowing us to do more than ever before. I firmly believe that these emerging companies will achieve extraordinary accomplishments in more fields and have a profound impact. Therefore, I think now is the best time to start a business. When the industry stagnates and lacks vitality, large companies often have the advantage. But whenever there is a technological revolution, like in mobile technology, the internet, or semiconductors, similar to the Industrial Revolution, emerging companies can stand out. So the current situation is truly exciting because we haven't encountered such disruptive changes in a long time.
Garry Tan: In your article, you made a very bold prediction that superintelligent AI will be born in a few thousand days.
Sam Altman: Yes, this is both our hope and our speculation. I can clearly see a development path, and the work we are doing will continue to accumulate results, creating a compounding effect. If we can maintain or even exceed the progress we've made over the past three years, then in the next three, six, or nine years—about 3,500 days—this system will be able to accomplish many astonishing things. I believe that even systems like o1 have already demonstrated a high level of foundational cognitive ability in specific limited tasks. o1 is undoubtedly a very intelligent entity, and I think we are far from reaching the limits of our progress.
Garry Tan: Your article is truly one of the most technically optimistic pieces I've ever read. The series of achievements we look forward to includes fixing climate issues, establishing space colonies, revealing all principles of physics, achieving near-infinite intelligence, and obtaining abundant energy.
Sam Altman: I do believe that all these things, and even more miracles that we cannot currently imagine, may not be far off. Being able to talk about these possibilities with a mix of seriousness and aspiration is incredibly exciting. One of the things I have always loved about YC is that it fosters a slightly magical sense of technological optimism and a firm belief that you can solve any problem. In a world that constantly tells people "that's impossible, you can't do it," I think this optimism can inspire entrepreneurs to look further, which is unique globally.
02 Abundant Energy + Infinite Intelligence
Garry Tan: Obtaining sufficient energy is undoubtedly a grand topic. We know there are two broad paths to get there. If we can truly achieve abundant energy, it seems to unlock almost every field, not just knowledge-intensive work but also labor-intensive work, through robotics, natural language processing, and ubiquitous intelligence. It feels like we are stepping into a truly abundant era.
Sam Altman: I believe that abundant intelligence and abundant energy are the two core ingredients for achieving every other vision. Of course, there are many other important factors, but if we can have abundant intelligence and energy, what we can accomplish in the world will be astonishing. For example, we can not only generate better ideas faster but also realize them in the physical world. Not to mention that running a large number of AI systems also requires energy. I think this will be a huge breakthrough. In fact, I don't find it surprising that all of this is happening at once; perhaps it is simply the inevitable result of accelerating technological progress. But there is no doubt that this is an extremely exciting time and a great opportunity for entrepreneurship.
Garry Tan: So we are indeed in this abundant era. Perhaps your robots can actually manufacture anything, and almost all physical labor can be transformed into material progress, benefiting not just the wealthiest but everyone. But what if we cannot achieve an infinite energy supply? What if there is some physical law preventing us from reaching that point?
Sam Altman: The trajectory of solar energy and storage technology has been quite encouraging. Even without a major breakthrough in nuclear energy, we could still get by. But there is no doubt that lowering energy costs and increasing energy supply directly improves quality of life. Ultimately, we will solve all the remaining problems in physics; we will find the answers, it is only a matter of time, and we deserve that outcome. One day we may no longer talk about fusion or similar technologies but will focus on even more advanced ones, which will also be extremely exciting. Energy may feel abundant to us now, yet it may still be far from enough for our descendants. And the universe is vast and rich in material resources.
03 Envisioning Running a Research Lab After Retirement
Garry Tan: You previously mentioned Paul Graham, who founded YC and brought us together. He loves to tell the story of how you joined YC: you were a freshman at Stanford University, and 2005 happened to be YC's first startup batch. He suggested you come back the following year, but you insisted that you were a sophomore and were coming anyway. As a result, you became quite well known in our community and were seen as formidable. Where do you think this determination comes from?
Sam Altman: These stories sometimes get exaggerated. In fact, I don't like being seen as an intimidating person; in many ways, I feel I don't fit that image. I simply didn't understand why things had to work the way they already did, so I just did what seemed feasible from first principles. That mindset might be somewhat unusual. I have always thought one of the great things about YC, and something I still cherish, is that it brings together a group of unconventional people who simply say, "I want to do what I want to do." This resonates with my sense of who I am; I truly believe you can try to do many things, or try new things, and that it is largely feasible. Moreover, I think the more attempts you make, the better. Only later did I realize that what makes YC so special, aside from great figures like Graham encouraging you and telling you "you can do it, I believe in you," is that there is a group of people around you who share the same beliefs. My biggest advice for young people is to find a peer group that inspires you as early as possible; that has been incredibly important for me.
Garry Tan: Talk about the early years of YC Research (the nonprofit research organization under YC)! You brought a very cool experimental spirit to YC. I remember you coming back to share with the partners the conversations you had been having with Larry Page and Sergey Brin. At that time, artificial intelligence had already become a hot topic; although it still had a long way to go to maturity, it felt within reach, and that was ten years ago!
Sam Altman: I have always thought that running a research lab is the coolest retirement job. When we first talked about YC Research, it wasn't specifically focused on artificial intelligence, but I always wanted to do something similar: create a lab that could fund research in different directions. Looking back, I wish I could tell you that we foresaw AI becoming mainstream from the beginning, but in reality, we also tried many unsuccessful projects. During that time, I read several books about Xerox PARC and Bell Labs, and Silicon Valley was widely discussing the need to rebuild excellent research labs. I thought that idea was cool and somewhat similar to YC: you provide funding for smart people, sometimes it succeeds, sometimes it fails, but the most important thing is to try. AI did indeed experience a small boom at that time, especially from late 2014 to early 2016. Discussions about superintelligence were very lively, and those kinds of books were quite popular. DeepMind also had some impressive achievements, but their direction was slightly different from ours. In fact, I have always been a loyal fan of AI, so I thought it was cool and wanted to try doing something. But even then, it was difficult for us to judge whether AI was ready to enter the mainstream.
04 Forming the Original Team of OpenAI
Garry Tan: How did you select and form your team during the early stages of YC Research and OpenAI?
Sam Altman: Greg Brockman (co-founder and president of OpenAI) was one of the core members who decided to join us early on. At that time, our recruitment process felt like a montage scene from a movie, rushing around looking for people who could move forward with us. I remember when I first heard the name Ilya Sutskever (co-founder and former chief scientist of OpenAI), I knew he was a very smart person. Through some video materials, I discovered he was a visionary and talented genius. His personal charm deeply attracted me, and I felt I had to meet him and have a chat. So, I emailed him, but unfortunately, he didn't reply. I didn't give up; instead, I decided to go to the conference where he was about to give a talk to try to meet him. That conversation was very pleasant, and we discussed many ideas about artificial intelligence and the future.
Looking back, one thing we did particularly well was to clearly state from the beginning that we were pursuing AGI (Artificial General Intelligence). At that time, many people were skeptical about this goal and even thought it was crazy and irresponsible to talk about it publicly. But this attitude immediately attracted the attention of the younger generation while drawing skepticism and ridicule from the older generation. However, I felt this was actually a very good sign, proving that what we were doing was meaningful. Our team was like a "ragtag army" of young people. I might have been the oldest at around 30 years old. Others were likely younger, and from the outside, people might have thought we were just a bunch of "clueless young people." But we didn't care; we were determined to do this. We each went out to find people, meet up, and form groups. After some exploration and attempts, the whole team gradually took shape. Although there were some twists and turns along the way, it took us about nine months to slowly find our direction, and then things started to progress.
I remember a special moment for OpenAI: we announced its establishment in December 2015, but because Sutskever was still tied up with Google, we actually had to wait until January 2016 to officially start working. So, around January 3, 2016, everyone returned from the holidays and gathered in Brockman's apartment, probably about ten people. We sat together, all feeling that we had done something epoch-making and were finally about to begin. And then everyone's reaction was, "So what do we do now?" It was a very interesting moment. It reminded me of those entrepreneurs who, after a long effort to raise funds, finally succeed and feel they have accomplished something great. In reality, the real test is just beginning; it isn't time to celebrate, because the actual competition has only just started. We had no idea how difficult this "competition" would be. It took us a long time to figure out what exactly we should do. But one thing that struck me deeply was how remarkable Sutskever in particular, and the entire early team, were. Despite all the twists and changes along the way, our initial ideas ultimately proved to be remarkably correct and impactful.
We were in Brockman's apartment, listing our ideas and plans on a whiteboard. Then we started trying things; some succeeded and some failed. But ultimately, we now have a relatively complete system. Looking back, the process feels crazy and incredible. We have come this far, through countless twists and difficulties, and we ultimately achieved the goal we set back then: making deep learning work. More specifically, we aimed to build a large unsupervised learning model and then solve the reinforcement learning (RL) problem. This direction was established at a very early offsite. We set three goals: first, figure out how to do unsupervised learning; second, solve reinforcement learning; and third, keep the team to no more than 120 people (which we did not achieve). Looking back, though, the first two goals pointed in a remarkably accurate direction.
05 Why the Scaling Law Was Considered Heretical
Garry Tan: Next, let's talk about the issue of "scaling," specifically whether we can scale up the model size. At that time, the scaling law was seen as a heretical view, and you faced a lot of criticism for it.
Sam Altman: We firmly believed that deep learning was effective and that its performance would improve with scale. These views were quite controversial at the time and even considered wrong. While we couldn't predict the effects of scaling precisely, our intuition told us it was feasible. However, people generally believed that these neural networks were not truly "learning" or "reasoning," but merely "playing tricks." This criticism came not only from the outside but also from some leading figures in the field, who thought it could lead to another AI winter. However, we witnessed continuous improvements in model performance, which further strengthened our belief.
As we delved deeper into practice, we found that the loss improved predictably as we scaled up, so we decided to keep pushing in that direction. We realized that learning is an important emergent phenomenon, even though we didn't understand all the details at the time. It was like discovering a new element in the periodic table; we wanted to explore the field further. Although we had limited resources and couldn't compete with companies like DeepMind, we decided to concentrate our efforts on one direction in the hope of achieving a breakthrough. This focus kept us from being overly clever and spreading ourselves too thin, and it led to significant results.
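As a rough illustration of what "predictable losses" means here (a sketch drawing on the published scaling-law literature, e.g. Kaplan et al., 2020, rather than anything stated in the interview), a language model's test loss is empirically well fit by simple power laws in parameter count N, dataset size D, and training compute C:

L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}

The critical constants and exponents are fitted from a handful of small training runs; once fitted, the curves extrapolate to much larger runs, which is what allows a team to predict the loss of a bigger model before committing the compute to train it.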
Garry Tan: You created a place where the smartest people in the world could fully unleash their talents. I heard that even at that time, acquiring computing resources was exceptionally difficult. Some "elders" in the industry criticized you for wasting resources, which could lead to an AI winter.
Sam Altman: People always questioned whether we were wasting resources and concentrating them in a way that was too risky. They believed we should spread resources across multiple directions to mitigate risk. But given the environment at the time, we believed in our choice and committed to it. We understood that spreading resources out might mean struggling to achieve a breakthrough in any direction, while concentrating our efforts could yield unexpected results. Although this strategy was not widely accepted at the time, we were confident it would succeed.
Garry Tan: You're right; this also relates to the issue of focus. You have to make choices, and it's best if that choice is correct because resources are limited. Prioritization is key to increasing the probability of success.
Sam Altman: We didn't know from the beginning that language models would become mainstream. We went through many attempts and setbacks before gradually accumulating scientific understanding. If we had known back then everything we know now, we might have accelerated the entire process. But the reality is that we couldn't accurately predict the future. We made many hypotheses, about the direction of technological development, the way to build the company, and the future of AGI, all of them filled with uncertainty. One of our great strengths, however, was our ability to get back up after setbacks and keep moving forward. This mindset applies not only to scientific exploration but also to how we think about the world and about product forms. At least I personally did not anticipate that language models would become mainstream. When we started, we made many different attempts, including research on robotics, agents, video games, and so on. A few years later GPT-3 emerged, and even it was not that well received at the time.
06 Commercialization of GPT-4
Garry Tan: At that time, unsupervised learning had not yet truly shown its potential, but an interesting phenomenon was discovered: a single neuron turned out to track sentiment, flipping between positive and negative, and that line of work ultimately gave rise to the GPT series. Jake Heller, the founder of Casetext, may have been one of the earliest people to commercialize GPT-4. He lived through the iterations from GPT-3 and GPT-3.5 to GPT-4 and described the moment he got access to GPT-4 as a revelation. Compared with the "hallucination" issues that GPT-3.5 often produced in legal applications, GPT-4 reached a new level in handling complex prompts; as long as the prompts were broken down finely enough into the workflow, it could complete almost any task people expected of it. Heller built a wide range of test cases around GPT-4 and went on to sell his company for $650 million. I therefore consider him a pioneer in the large-scale commercialization of GPT-4.
Sam Altman: I remember that conversation with Heller; it was one of the moments we realized GPT-4 truly had enormous potential. When we first pitched GPT-3, entrepreneurs thought it was cool but didn't see enough commercial opportunity. With the arrival of GPT-3.5, especially as YC startups began to take an interest, we could feel the market's enthusiasm. By the time GPT-4 was released, customers were even asking us directly how many GPUs we could provide. That made us confident we had a truly excellent product on our hands. In fact, from user feedback we learned that whenever a new model was released and put into use, people were amazed by its progress. We ran various tests, and the results were encouraging. GPT-4 could even rhyme and tell genuinely funny jokes, which excited our team. But you can never judge whether a product will become a true "hit" from internal testing alone; the real test lies with users. Although we were internally confident about GPT-4, we still felt anxious before users tried it.
07 Why Did You Think of Starting Loopt?
Garry Tan: Before creating what may be the craziest AI lab in history, you founded Loopt at the age of 19, an application similar to "Find My Friends," which was well ahead of Apple's similar feature. At that time, what inspired this idea?
Sam Altman: I was fascinated by mobile technology; phones were gradually becoming popular while I was still in high school. I realized that phones were not just communication tools; they would become portable computing platforms. The idea for Loopt stemmed from my vision of the future potential of mobile phones, and I wanted to do something fun and practical with them.
Garry Tan: Although Loopt ultimately did not achieve commercial success, that experience was undoubtedly very valuable for you. You gained practical experience in managing employees, corporate sales, and more.
Sam Altman: Indeed, Loopt gave me a deep understanding of the challenges of entrepreneurship. Although we failed to find product-market fit, that experience taught me many important lessons about entrepreneurship, platform transformation, and how to find positioning amid change. As Graham said, "Your twenties are an apprenticeship, but you don't know what you're preparing for until you actually start your own business." I have never heard of anything that provides a faster opportunity for broad learning than entrepreneurship.
Garry Tan: This reminds me of how Facebook almost missed the mobile wave because they were focused on web software, and they had to acquire companies like Instagram and WhatsApp to fill their gaps. I think you, like Elon Musk, Jeff Bezos, and many others, started your journey as founders. Some might believe that in the early stages of entrepreneurship, one should first solve funding issues before considering the craziest and most challenging projects. What are your thoughts on this?
Sam Altman: I believe that securing early funding for a startup is very helpful. I think it's quite difficult to find others to do this in the initial stages. In the early days of OpenAI, I received investments from Musk and others, for which I am very grateful. I also invested in some other projects and was very happy to support them; I think it would have been challenging for me to find others to make those investments. This funding allowed us to focus on technical research and product development without overly worrying about financial issues. I learned very valuable lessons from these experiences. However, I also feel that I somewhat wasted time when I was working on Loopt. Using the word "waste" might not be entirely appropriate, but overall, I feel that if I could do it again, I might choose to do something different. I certainly don't regret any of it; it's all part of life, and I've learned a lot.
08 What Would You Do If You Could Go Back to Age 19?
Garry Tan: If time could be reversed, giving you the opportunity to send advice to your 19-year-old self studying at Stanford through a time capsule, what would you say?
Sam Altman: This is a tough question, because studying artificial intelligence was always what I wanted to do most; I went to college to delve deeper into the field. But in the AI labs of that era, people often advised me not to get involved in neural network research, because earlier researchers had tried it and failed, though that had been a long time ago. I might have chosen a research path instead of Loopt, but I can't say exactly what direction it would have taken. Overall, things went relatively smoothly.
Garry Tan: For OpenAI, the past year has been quite eventful, and many achievements have been made. How do you view the series of events that occurred last fall? The team does experience changes, but how do you feel about it now?
Sam Altman: Although I'm a bit fatigued, overall I feel pretty good. Our progress has been rapid, as if we crossed, in a very short time, growth stages that typically take mid-sized to large tech companies a decade. It has been less than two years since the launch of ChatGPT. We have been through a lot along the way. Any company undergoing expansion will experience changes in management: people who excel in the zero-to-one startup phase may not be able to handle the leap from one to ten, or from one hundred to one thousand. We too have gone through many transformations, made mistakes, and made the right calls. What matters is making the decisions that best serve the company's mission, whether that is achieving AGI or other grand goals. That brings a lot of change, and I hope we are entering a relatively stable phase, but I know there will be other turbulent times ahead.
Garry Tan: How does OpenAI currently operate? The quality and efficiency of your work have, in my view, far surpassed many long-established software companies in the world.
Sam Altman: This is the first time I truly feel that we have clarified our direction moving forward. I believe that from now until the establishment of AGI, although we still need to put in tremendous effort, we have a clear understanding of what we need to do. This is incredibly exciting. At the product level, there are still many mysteries to solve, but generally, we have locked in our goals and the key areas that need optimization. When you have this clarity, I believe you can move forward quickly. As long as you are willing to focus on these core tasks and dedicate yourself to doing them well, you can effectively organize around them. Our research path is relatively clear, the infrastructure path is becoming clearer, and the product path is gradually taking shape. You can make efficient adjustments around these directions. There was a time when we lacked these; back then, we were merely a pure research lab, and even if you knew these directions, it was challenging to make decisions in practice because there were always too many things you wanted to try. But getting everyone to aim for the same goal and work together is the key factor determining the speed of the company's development.
09 The Levels of AGI
Garry Tan: I feel that we have recently achieved a leap from the first level to the second level of general artificial intelligence, and this transformation is powerful. Immediately after, at the o1 hackathon hosted by YC, I witnessed a truly impressive and fun sight. One of the standout winners was Camper, a startup focused on CAD/CAM (computer-aided design/computer-aided manufacturing). They built a system that can continuously iterate and optimize airfoil designs, transforming a design that was originally unable to fly into one that has significant lift. This sounds like a direct leap to the fourth level—the innovator stage, which is truly amazing.
Sam Altman: Your description is indeed thought-provoking. The point I have been conveying to the outside world is that the leap from the second level to the third level will be achieved quickly, but moving from the third level to the fourth level will face greater challenges, requiring the emergence of some novel ideas of medium to larger scale. However, some demonstration results have convinced me that as long as we can cleverly utilize the current models, we can still unlock a lot of innovative potential.
Garry Tan: Camper has successfully built the core software architecture needed for CAD/CAM and cleverly used language as the interface to large language models, allowing the model to drive these software tools. Combine that with the idea of code generation and you get an extremely bold concept: large language models not only writing programs but also creating tools for themselves and composing those tools, much like the powerful combination of chain-of-thought reasoning and o1.
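As a minimal sketch of the "language as an interface to software tools" idea described here, the loop below lets a model call a tool, read the result, and iterate on a design. It assumes the OpenAI Python SDK's chat-completions tool-calling interface; the evaluate_airfoil function, its toy lift formula, and the model name are illustrative assumptions, not Camper's actual system.

# Sketch: an LLM iteratively calls a hypothetical airfoil-evaluation tool and
# uses the returned lift figure to refine its design.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def evaluate_airfoil(chord: float, camber: float, thickness: float) -> dict:
    """Hypothetical stand-in for a CAD/CAM simulation returning a lift estimate."""
    lift = max(0.0, 2.0 * camber + 0.5 * thickness - 0.1 * chord)  # toy formula
    return {"lift_coefficient": round(lift, 3)}

tools = [{
    "type": "function",
    "function": {
        "name": "evaluate_airfoil",
        "description": "Simulate an airfoil and return its lift coefficient.",
        "parameters": {
            "type": "object",
            "properties": {
                "chord": {"type": "number"},
                "camber": {"type": "number"},
                "thickness": {"type": "number"},
            },
            "required": ["chord", "camber", "thickness"],
        },
    },
}]

messages = [{"role": "user",
             "content": "Iteratively adjust chord, camber and thickness to maximize lift."}]

for _ in range(5):  # bounded design loop
    response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
    msg = response.choices[0].message
    messages.append(msg)
    if not msg.tool_calls:
        print(msg.content)  # the model's final design summary
        break
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        result = evaluate_airfoil(**args)  # run the "simulation"
        messages.append({"role": "tool",
                         "tool_call_id": call.id,
                         "content": json.dumps(result)})

The design choice here is simply that the program, not the model, executes the tool and feeds the result back as a "tool" message, so the model can reason over concrete numbers in the next turn.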
Sam Altman: I believe that the pace of development in the future will far exceed everyone's current expectations. This is indeed an era full of infinite possibilities. The phrase you mentioned earlier, "discovering everything in physics," resonates deeply with me. I once dreamed of becoming a physicist, but unfortunately, I did not have the talent to become an outstanding physicist and could only contribute in other ways. However, I now firmly believe that one day, someone will use these technologies to solve all the problems in the field of physics. I feel incredibly excited to live in this era and eagerly await the arrival of that person who will make breakthrough progress.
Garry Tan: Could you briefly explain the concepts of the third, fourth, and fifth levels of general artificial intelligence?
Sam Altman: We realize that the term "general artificial intelligence" has been overused, and people's understanding of it varies. Therefore, we try to provide a rough sequential framework, dividing it into several stages. The first-level system is a chatbot, the second-level system is a reasoning engine, and we believe that with the release of o1, we reached this level earlier this year. The third level consists of agents that can perform long-term tasks, such as interacting with the environment multiple times, requesting help from humans when needed, and collaborating. As for the fourth-level innovators, I believe we will reach this level faster than people expect. It's like scientists having the ability to explore and understand phenomena that have long puzzled us. The fifth level is a relatively vague concept; it acts similarly but scales up to the level of entire companies or organizations. That would be an extremely powerful existence.
Garry Tan: Do you think there will be companies in the future with annual revenues reaching billions of dollars, but with employee counts possibly under 100, 50, 20, or even just 1?
Sam Altman: This indeed seems to be a possible trend. I don't know how to fully evaluate this phenomenon, but I do feel that this trend is quietly happening. At least for entrepreneurs, this is undoubtedly a fantastic opportunity.
10 Advice for Early Founders
Garry Tan: What advice do you have for those who are about to start or have just started their entrepreneurial journey?
Sam Altman: First, be keenly aware of current technological trends and bet on them. We are still far from reaching the saturation point of technology; these AI models will continue to improve at an astonishing pace. What you can do as a founder of a startup with this technology will be vastly different from what you could do without it. Large companies, medium-sized companies, and even startups that have been established for several years have relatively long planning cycles; for example, Google even engages in ten-year planning. But your advantages lie in speed, focus, conviction, and the ability to respond quickly to technological advancements. This is the biggest competitive advantage for startups right now.
Secondly, I suggest that you get hands-on and build something practical with AI. When you see a new idea, don't hesitate; implement it immediately instead of putting it into quarterly plans or long-term strategies.
Furthermore, when there is a new technological platform, some entrepreneurs might think, "Because I'm doing AI, the business rules don't apply to me; I have this magical technology, so I don't need to establish a competitive advantage or develop better products." This mindset is very dangerous. Indeed, by embracing new technology more quickly, you might achieve short-term explosive growth, but in the long run, you still need to build a product or service that can continuously provide value. Everyone can now create fantastic demonstrations, but building a successful business is the key. That is the hardest part, and business rules still apply. You can innovate faster and better than ever before, but you still need to establish a robust business.
Garry Tan: What are your expectations for 2025?
Sam Altman: I certainly look forward to the development of AGI. But beyond that, there is something that excites me even more, and that is my child. His arrival brings me immense happiness and anticipation. (Translated by Tencent Technology, special contributor Jin Lu)