Bloomberg: How will artificial intelligence disrupt the way companies are organized?

The economic system has long been built on the notion that expertise is scarce and expensive. However, artificial intelligence is about to make this expertise abundant and almost free.

Until very recently, assembling a dozen PhD-level experts meant a large budget and months of lead time. Today, you can tap that kind of brainpower instantly by typing a question into a chatbot.

When intelligence becomes cheaper and faster to obtain, the assumption underpinning much of our social system, namely that human insight is scarce and expensive, no longer holds. How will companies be organized when the insight of a dozen experts can be summoned at any moment? How will the way we innovate evolve? How should each of us approach learning and decision-making? The question facing individuals and businesses alike is this: how will you act when intelligence itself is readily available and nearly free?

A Brief History of the Falling Cost of Wisdom

History offers several precedents in which the cost of knowledge fell sharply and its reach expanded just as fast. The invention of the printing press in the mid-15th century drastically reduced the cost of reproducing written material; before that, texts had to be copied by hand by specialists such as monks, a process that was slow and expensive.

Once that bottleneck broke, Europe went through profound social change: the Protestant Reformation upended the religious order; literacy rates soared, laying the foundation for universal primary education; and scientific research flourished through printed publications. Commercially minded countries such as the Netherlands and England benefited enormously, with the Netherlands entering its Golden Age and England going on to play an outsized role on the world stage for centuries.

Over time, widespread literacy and public education raised society's collective capability and laid the groundwork for industrialization. Factory work grew more specialized, and increasingly complex divisions of labor drove economic growth. By the end of the 18th century, the countries with the highest male literacy rates were the first to industrialize; by the end of the 19th century, the most technologically advanced economies were generally those with the highest literacy rates. As people acquired new skills, more specialized roles emerged, a virtuous cycle that continues to this day.

The internet pushed this trend further still. When I was a child, researching a new topic meant going to the library with a notepad and hunting through books, which could eat up most of a day. Acquiring knowledge was expensive and slow.

Today, artificial intelligence has picked up the baton in this millennia-long relay of driving down the cost of wisdom, opening a new chapter for our economy and our ways of thinking.

My "Aha Moment" with ChatGPT

When I first used ChatGPT in December 2022, I sensed it was a milestone product. At first I used it mostly for party tricks, such as asking the AI to rewrite the Declaration of Independence in the style of Eminem (the result ran along the lines of "Yo, we gotta say it loud, these folks will never be knocked down").

Looking back, it was like asking a Michelin-star chef to make you a grilled cheese sandwich—such a waste of talent. It wasn't until one afternoon in January 2023, when my 12-year-old daughter and I spent a few hours designing a brand new board game with the help of ChatGPT, that I truly realized the power of such tools.

I started by telling the AI which board games we liked and disliked and asking it to identify what they had in common. It found that we enjoyed mechanics involving laying paths, managing resources, collecting cards, strategizing, and playing for high stakes, while we disliked patterns typical of games like Risk or Monopoly.

I then asked it to propose some less obvious but promising game ideas built on those elements, ideally with some historical context. ChatGPT came up with a game called "Elemental Discoveries": players take on the roles of 18th- and 19th-century chemists, collecting and trading resources to run experiments and score points, while also being able to interfere with one another.

Then I asked it to flesh out the resources, gameplay mechanics, and suitable player roles. It suggested roles such as "Alchemist," "Saboteur," "Merchant," and "Scientist," pairing them with historical chemists such as Lavoisier, Joseph-Louis Gay-Lussac, Marie Curie, and Carl Wilhelm Scheele.
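For readers curious what this kind of iterative back-and-forth looks like in code, here is a minimal sketch using the OpenAI Python SDK. It is purely illustrative: we worked in the ChatGPT interface, and the model name, prompts, and structure below are assumptions rather than a transcript of our actual session.

```python
# A minimal sketch of the iterative prompting loop described above, using the
# OpenAI Python SDK (v1.x). Model name, prompts, and structure are illustrative
# assumptions, not a record of the actual ChatGPT session.
from openai import OpenAI

client = OpenAI()   # expects OPENAI_API_KEY in the environment
messages = []       # running conversation history, so each step builds on the last

def ask(prompt: str) -> str:
    """Add a follow-up to the same conversation and return the assistant's reply."""
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

# Step 1: describe our likes and dislikes and ask for the common thread.
ask("We enjoy board games with path-laying, resource management, card collecting, "
    "strategy, and high stakes; we dislike the patterns typical of Risk or Monopoly. "
    "What do these preferences have in common?")

# Step 2: ask for less obvious game concepts with a historical setting.
ask("Propose a few less obvious game ideas built on those mechanics, set in a "
    "historical period.")

# Step 3: refine the most promising idea into concrete resources, rules, and roles.
print(ask("Flesh out the resources, turn structure, and player roles for the most "
          "promising idea."))
```

Each call carries the full conversation history, which is what lets the later refinements build on the earlier analysis.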

With the then relatively "basic" ChatGPT, we created a rough but playable board game in just two to three hours. Eventually, I had to stop, partly due to time constraints and partly because I was exhausted. That experience made me realize that an AI "collaborator" can compress a development process that would normally take weeks into just a few hours. Just think about the immense potential if it were applied to product development, market analysis, or even corporate strategy!

What I saw in ChatGPT was not merely a tool that repeats or piles up facts. It showed an ability to reason by analogy and in concepts, to connect ideas to real-world references, and to produce genuinely creative responses to what we asked of it.

From "Random Parrot" to "Deep Thinker"

A trillion is already an astonishing number. The large language models behind ChatGPT are estimated to have hundreds of billions or even trillions of parameters, a level of complexity that is hard to wrap one's head around.

We still do not fully understand why and how these models work. Even as they delivered breakthrough after breakthrough over the past seven years, some theorists insisted they could not produce anything truly new; in 2021, researchers coined the dismissive term "stochastic parrots" for them, on the grounds that large language models merely predict text from statistical patterns in their training data, much as a parrot repeats phrases without understanding them.

For anyone who uses these tools regularly, however, it is hard to believe they are merely repeating what they have seen, and over the past six months that view has become even harder to defend.

The earliest large language models were, in effect, speaking on intuition, with no capacity for reflection and no notion of self-awareness. In the framework of Nobel laureate Daniel Kahneman, humans mostly think with System 1 (fast, intuitive responses), but when careful thought is needed we switch to System 2 (slow, deliberate, and less error-prone). Early versions of ChatGPT and its competitors mostly exhibited System 1-like behavior and lacked anything resembling System 2 reasoning.

That began to change in September 2024, when OpenAI released a reasoning model called o1, which can break a complex logical problem into multiple steps, verify its intermediate conclusions (and backtrack if necessary), and thereby arrive at better final answers. Unlike traditional large language models, which lean on memorization and surface-level pattern matching, these reasoning models are starting to dissect problems and think critically. On some tests they can now rival, or even surpass, PhD-level experts in specialized fields.
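OpenAI has not published how o1 works internally, so the toy sketch below is only an analogy for the pattern just described: take a problem apart into steps, check each intermediate result, and backtrack when a step leads nowhere. The function and numbers are invented purely for illustration.

```python
# A toy backtracking search, used here only as an analogy for the decompose /
# verify / backtrack loop described above -- not as a description of how
# reasoning models actually work internally.
def find_subset(target: int, parts: list[int], chosen: list[int] | None = None):
    """Find a subset of `parts` that sums exactly to `target`, one step at a time."""
    chosen = chosen or []
    if target == 0:                # verification: the partial solution is complete
        return chosen
    for i, p in enumerate(parts):
        if p <= target:            # check the candidate step before committing to it
            result = find_subset(target - p, parts[i + 1:], chosen + [p])
            if result is not None:
                return result      # every step on this branch checked out
            # otherwise: backtrack and try a different step
    return None                    # no valid decomposition from this state

print(find_subset(10, [8, 5, 3, 7, 2]))   # e.g. [8, 2]
```

The point of the analogy is the control flow: rather than committing to a first guess, the process tests intermediate steps and revises course, which is the System 2-style behavior that earlier models lacked.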

Since the release of o1, AI has made astonishing progress in just six months. The hottest topic now is how to turn these reasoning models into "autonomous research assistants." Their performance is truly impressive.

Recently, I asked a research agent to analyze "comprehensive environmental impact assessments for large events and operations such as F1 races, the Coachella music festival, Disneyland, Las Vegas casinos, hospitals, and large zoos." The AI took 73 minutes, consulted 29 independent sources, and delivered a detailed results table along with a 1,916-word written explanation. The quality still left room for improvement, but it was roughly comparable to a report a graduate student might spend several days producing, and it saved me that time.

Just 18 months ago, the AI tools I used could only handle small tasks taking less than half an hour; now they can take on far more complex and time-consuming research.

The Emergence of the "Cognitive Production Line"

We have been living through a long evolution in how knowledge is used and how cognitive labor gets done. From the era when temples and scholars monopolized learning, to the printing press making knowledge reproducible, to the internet making information itself instantly accessible, the bottleneck has steadily shifted toward making sense of that information. Now even the cognitive tasks we once treated as scarce and complex are becoming cheap and close at hand.

Yet when I talk with senior executives at large enterprises, I find they mostly apply AI to trivial areas, such as automating customer service to cut costs. Salesforce's CEO said last December that 86% of the company's 36,000 weekly customer support inquiries are answered by AI; Swedish fintech Klarna says AI handles two-thirds of its customer service conversations, which by itself has brought the company $40 million in profit. But trimming 10% off costs through customer service is not enough for a qualitative leap; no great company has ever succeeded by cutting costs alone.

So most companies start at the low end, using AI for work worth perhaps $50 an hour, such as customer service chat. That is useful, but it is far from transformative. AI is equally capable of work worth as much as $5,000 an hour: research and development, strategic planning, professional consulting. Why are so few companies deploying it in these critical areas?

One reason is that people find it hard to imagine that work assumed to require senior managers or top experts could be done, even in part, by machines. Because exceptional talent is scarce, those high-value tasks feel especially precious, and our organizational structures are built around the assumption that the supply of truly first-rate minds is limited.

Take the pharmaceutical industry as an example: a blockbuster new drug can often determine a company's success or failure. The bottleneck lies in pushing the drug through the expensive and time-consuming approval process—typically requiring 10 to 15 years and over $1 billion in investment, with often only one out of thousands of candidate molecules making it to market. Meanwhile, in a large pharmaceutical company, the number of marketing personnel may be thousands of times greater than that of top R&D personnel, because truly experienced research experts are extremely scarce.

At this stage, most business leaders are still in the phase of "trying to accept AI" rather than "truly believing in AI." They are accustomed to thinking that some problems are too difficult or too expensive, and they avoid them if possible. However, with the emergence of AI, the constraint is no longer "can we come up with a solution," but rather "how quickly can we implement and validate good ideas."

All of this will have profound implications. When every company can call upon several "PhD-level AI experts" at any time, the speed of innovation will naturally accelerate significantly. Just as Henry Ford's assembly line allowed for rapid iteration and improvement in production processes, AI can continuously refine and update ideas and solutions, enabling companies to learn faster, experiment more quickly, and pivot swiftly.

Of course, if a company lacks the ability to implement the ideas its AI "think tank" proposes, even the most brilliant ideas are useless. What truly sets companies apart is the ability to execute and integrate smoothly.

My Daily Life with AI

Over the past 18 months, I have gradually assembled an "AI ecosystem" that works for me. On one day in June 2024, for example, I called on these AI systems 38 times for my research, exchanging a total of roughly 79,000 words.

By January 2025, I had stopped counting the words. But as long as the other participants (real people) did not object, I brought an AI note-taker to almost every meeting. In my daily research I routinely use several different AI tools: in the week I wrote this article alone, I made at least 144 queries to various large language models, not counting 26 audio transcriptions and my use of coding assistants. I now reach for this new generation of AI tools more often than I reach for Google Search.

Surprisingly, even though I am handling more work, and handling it faster, I spend less time in front of a screen than I did in previous years, which makes me very happy.

When the cost of intelligence approaches zero, the real bottleneck is no longer how to acquire brains but how to make good use of them. The individuals and organizations that can ask good questions, evaluate answers objectively, and act decisively will be the big winners. They will also have to answer a new question: with so much more time on their hands, what will they do with it?
