Sam Altman Davos Roundtable: On AGI, New York Times Lawsuit, and Board Turmoil


"No matter how artificial intelligence develops, humans will still have the ultimate decision-making power over the world."


Sam Altman, CEO of OpenAI and one of the most prominent figures in artificial intelligence, joined a group of heavyweight business leaders for a roundtable discussion at the World Economic Forum in Davos, Switzerland, on January 19, on the theme "Technology in a Turbulent World."

Source: YouTube

Below are Altman's answers to a series of questions, lightly edited for readability:

Question: I think most people's concerns about artificial intelligence fall into two categories. First, will it end the human race as we know it? Second, why can't artificial intelligence just drive my car for me?

Sam Altman: A very good sign about this new tool is that, even though its current capabilities are quite limited and it has significant flaws, people are finding ways to use it to gain productivity or other benefits, and they understand its limitations: it's a system that is sometimes accurate, sometimes creative, and sometimes completely wrong. We would never trust it to drive a car, but we are very happy to use it to brainstorm, write articles, or check code.

Why can't artificial intelligence drive? There are actually great self-driving systems around San Francisco (such as Waymo), and people really like them. My point is that models like OpenAI's are good at some things, but not at life-or-death situations.

Artificial intelligence has been demystified to some extent because people are really using it now. I think this is always the best way to drive the development of new technologies.

Question: People are concerned about whether they can trust the capabilities of artificial intelligence. At what point can we say, "I can really let artificial intelligence do this, whether it's driving, writing a paper, or filling out medical forms"? Or do we have to trust the black box to some extent?

Sam Altman: I think humans are quite forgiving of mistakes made by other humans, but very unforgiving of mistakes made by computers. For those who say "autonomous cars are already safer than human-driven cars," my guess is that people will only accept them when they are 10 to 100 times safer than human drivers, or even more…

In a sense, the hardest part is that when it is right 99.999% of the time, you let your guard down.

I can't actually get into your brain, look at your 100 trillion synapses, and try to understand what is happening in each one… but I can ask you to explain your reasoning to me… and decide whether it sounds reasonable to me.

Our artificial intelligence systems will be able to do the same thing: they will be able to explain to us, in natural language, the steps from A to B, and we can decide whether we think those steps are sound.

Question: If artificial intelligence can surpass human analytical and computational abilities, what is left for humans to do? Many people say this means all we will have left is… our emotional intelligence… do you think artificial intelligence can beat us at that too?

Sam Altman: Chess was one of the first "victims" of artificial intelligence; Deep Blue defeated Kasparov a long time ago. All the commentators said, "Computers can beat humans, so this is the end of chess. No one will bother to watch or play chess anymore." (Yet) chess has never been as popular as it is now. It would be a big deal if (a player) were caught cheating with artificial intelligence. And no one, or almost no one, watches matches between two artificial intelligences.

I admit (the AI revolution feels different from past technological disruptions). General cognition is very close to the human qualities we value… so everyone's job will change… we will all work at a higher level of abstraction, and we will all gain more capabilities. We will still make the decisions; over time, those decisions may become more strategic, but the decisions about what should happen in the world will still be made by us.

Question: You have always taken a relatively measured attitude towards artificial intelligence. But people like Elon Musk, sometimes Bill Gates, and other very smart people… are very, very worried. Why do you think they are wrong?

Sam Altman: I don't think they are necessarily wrong… this is obviously a very powerful technology, and we cannot say for certain what will happen. That is true of every major technological revolution. But it is easy to imagine this technology having an enormous impact on the world, and that impact could go wrong.

The technological direction we have been pushing hard is the one we believe we can make safe, and that includes many things. We believe in iterative deployment. We put this technology out into the world… so people can get used to it, and so that society, or our institutions, have time to discuss how to regulate it and what guardrails to put in place.

Question: Technically speaking, can you put guardrails on artificial intelligence systems? Is that feasible?

Sam Altman: If you look at the progress from GPT-3 to GPT-4 in terms of how well it can be aligned with a set of values, you will see we have made huge progress there. Now there is a problem even harder than the technology: who decides what those values are, what the defaults are, and where the boundaries lie? How does it work in this country versus that country? What am I allowed to do with it, and what am I not? That is a major societal question, and one of the biggest.

But from a technical point of view, there is still reason for optimism, although I think the alignment technique (or techniques) we have now will not scale all the way to much more powerful systems, so we will need to invent new things. I think it is good that people fear the downsides of this technology, that we talk about it, and that we and others are held to high standards.

I understand the general discomfort the world feels about companies like ours… Why is our future in their hands? And… why are they doing this? Why did they get to do this?… The whole world now sees the stakes here as enormous, which is why we should do this.

But I think we have a responsibility to figure out a way to get input from society: to understand how we will make these decisions, not only about what the values of the system are, but also about what the safety thresholds are, and what kind of global coordination we need so that what happens in one country does not have a severely negative impact on another.

Question: How do you view the lawsuit from The New York Times… shouldn't the people who wrote these articles be compensated?

Sam Altman: We would like to train on The New York Times, but it is not a priority. In fact, we don't need to train on their data. This is something people don't understand: no single training source has much impact on us.

We want to work with content owners like The New York Times. We are making deals with many publishers, and over time we will make deals with more. When a user asks, "Hey, ChatGPT, what happened at Davos today?", we would like to show content and links, display the brands of The New York Times, The Wall Street Journal, or any other excellent publication, and answer, "This is what happened today; this is real-time information." Then we want to pay for it and drive traffic to them. But that is about displaying information when users ask for it, not about training the model.

Now, we could also use it to train models, but that is not a priority… One thing I expect to start changing is that these models will be able to learn from smaller amounts of higher-quality data during training, and think harder about it. You don't need to read 2,000 biology textbooks to understand high-school biology. Maybe you need one, maybe three, but 2,000… certainly won't help you. Once our models start working that way, we won't need the same quantity of training data.

In any case, though, we want to find a new economic model that works for the whole world (including content owners)… If we are going to use your textbooks and your lesson plans to teach other people physics, we want to find a way for you to be compensated. If you can teach our models, if you can help provide feedback, I would be happy to find new models that reward you for that success.

So we really do need a new economic model. The current conversation is focused at slightly the wrong level. I think what it means to train these models will change significantly in the coming years.

Question: You were at the center of one of the most well-known boardroom dramas in recent decades. What did you learn from it?

Sam Altman: Sometimes, you just have to laugh. It's so absurd…

We knew our board had become too small, and we knew it lacked the experience it needed. But last year was, in many ways, so crazy for us that we neglected this.

But I think the bigger point is that as the world gets closer to AGI, the stakes, the stress, and the tension will all rise. For us, this was just a microcosm of that, and probably not the greatest pressure we will ever face. Something I have observed for a while: with every step we take towards powerful artificial intelligence, everyone's character gets, like, ten crazy points added. It is a very stressful thing, and it should be, because we are trying to be responsible for very high stakes.

The best thing I have learned from the whole process so far is the strength of our team. When the board asked me, the day after firing me, whether I would consider coming back, my initial reaction was "no," because I was very (frustrated) by a lot of things. Then I quickly came to my senses, and I realized I didn't want to see the value destroyed that these outstanding employees, who had poured their lives into the company and its customers, had built. But I also knew… the company would be fine without me.

Original Author: Deborah Yao

Article Source: https://aibusiness.com/nlp/from-davos-sam-altman-on-agi-the-nyt-lawsuit-and-getting-fired

