"AI Deceleration, Web3 Acceleration": Public Opinion, Discourse, and Politics


Today, Binance founder Changpeng Zhao pleaded guilty and formally stepped down as Binance CEO. As for the power struggle at OpenAI, is the founder's ouster not itself the greatest act of decentralization?

Author: Peng SUN, Foresight News

Last weekend the entire tech industry was consumed by the internal struggle at OpenAI, which even overshadowed SpaceX's second Starship launch. From CEO Sam Altman's dismissal, to pressure on the board from OpenAI's investors and Altman's attempt to return, to Monday's announcement that Altman would join Microsoft, the drama seemed to have run its course. Then it took another turn: Sam Altman returned to OpenAI as CEO, with a new initial board consisting of former Facebook CTO Bret Taylor (chair), former U.S. Treasury Secretary Larry Summers, and Quora CEO Adam D'Angelo. OpenAI co-founder Greg Brockman will return as well. On that explosive Wednesday, AI genuinely saved face for the crypto world; otherwise, all crypto would be talking about today is Binance and Changpeng Zhao.

"Back to the beginning, the memory of your youthful face." The concealment of the truth about the internal struggle at OpenAI by the board and Sam Altman has prevented us from knowing the fundamental reasons for the dismissal. A struggle for direction? Poor communication? I don't know, and I'm not interested. Instead of speculating about the "truth" behind it, it's better to focus on the discussions of some Web3 practitioners from a postmodern perspective and observe the discourse and politics behind these discussions. Because to a large extent, the construction of discourse obscures too many details, and only discourse that benefits oneself will be included in the narrative. But people and events are complex, not something that can be encompassed by simple ideologies.

When the dismissal was announced, public opinion overwhelmingly sided with Sam Altman and Greg Brockman, the so-called souls of OpenAI, and the investors' actions likewise testified to Altman's importance to the company. The media and public opinion framed the internal struggle as a "coup," with chief scientist Ilya Sutskever cast as its mastermind. While AI dominated the stage, Web3 was not idle: practitioners such as former Coinbase CTO Balaji Srinivasan, Coinbase co-founder and CEO Brian Armstrong, and a16z general partner Sriram Krishnan loudly championed "decentralized AI" and rejected effective altruism and deceleration. In the process, a new opposition and a new discourse quietly took shape: AI is decelerating, while Web3 constructs itself as progressive acceleration and the future of human history.

I. "Creating Enemies": The Discourse Politics of Web3 Effective Accelerationism

After news broke that the OpenAI board had dismissed Sam Altman, I noticed the disdain that Web3 effective accelerationists, represented by Balaji Srinivasan and Brian Armstrong, showed toward the board. In their discursive system, effective altruism and AI doomerism are the antithesis of effective accelerationism. "Good" and "evil," "centralization" and "decentralization," "AI" and "Web3," "nation-state" and "denationalization" together form a complete set of discursive oppositions. In fact, denouncing "centralization," "authoritarianism," "dictatorship," and "deceleration" while championing "grassroots leadership" supplies Web3's moral foundation, and "acceleration," standing for progress, lends the Web3 revolution a natural legitimacy.

What is effective accelerationism (commonly abbreviated online as e/acc)? I have selected a widely circulated definition, one that leans toward right-wing accelerationism:

Effective accelerationism is a belief grounded in the second law of thermodynamics: the universe itself is an optimizing process that creates life and expands without end. The engine of this expansion is technocapital. The engine cannot be stopped. The wheels of progress turn in one direction only.

This school of thought is represented by Nick Land, and it inspired the American neo-reactionary movement. Yes, its adherents are reactionary rather than radical: even as they embrace modern technology, they deny the universal values of democracy, human rights, and freedom.

Effective accelerationism venerates natural law, holding that technology, like the natural world, can increase entropy without limit; its adherents deny that it is an ideology or a movement at all, calling it instead an acknowledgment of truth. In their view, technocapital (a coinage associated with Nick Land) is an unstoppable physical process.

Effective accelerationism is a strike against technocratic control, against the doomers and decelerationists who would have us consume less free energy and create less entropy. Top-down control, on this view, only flattens the dimensions of civilization.

Effective accelerationism overlooks one thing: this world is constructed by people, civilization is created by people, and the essence of civilization is negentropy, the continual reduction of entropy. A mature civilization means an established social order, and order is by nature stabilizing; it folds the uncontrollable and the uncertain into something governable, or compresses them into society's darkest corners. Since the birth of modern science, human society has accelerated its secularization, and scientific knowledge has triumphed over religious obscurantism. Precisely because science can be falsified within its own boundaries, scientific knowledge remains something humans can reach. Civilization works the same way. In other words, the logic of effective accelerationism faces a philosophical challenge.

Effective accelerationists are usually thoroughgoing tech elites who hold the core discourse and power of this era and the next. In their view, humans have adapted to every industrial revolution and technological upheaval, so humans will adapt to AI as well. Accelerate! Accelerate! Accelerate! That is the effective accelerationists' slogan, and anything that impedes unlimited technological progress is to be opposed. For Web3 effective accelerationists, "acceleration" must also be wedded to "decentralization." We know that all political construction is storytelling, and that political struggles are, in essence, contests between different "stories" (political narratives). When a story and its discursive system win wide acceptance, power follows. Effective accelerationists equate AI doomerism with decelerationism and cast both as their opposition, an other alien to their own ideology, in order to tell their political narrative: nation-states are destined to decline, and the political narrative of crypto and decentralization is to become absolute truth and the end of history.

When the OpenAI incident broke, reasonable speculation about safety and corporate governance became a handle for the effective accelerationists. Their political narrative of "decentralization," "denationalization," and "decentralized governance" played neatly off the "eccentric," "dictatorial," and "opaque" governance structure the OpenAI board was being questioned over.

On November 18, Balaji's first tweet about OpenAI read: "Doomers ultimately trust centralized nation-states, just as liberals trusted the Patriot Act after 9/11; effective accelerationists trust decentralized networks." "Fundamentally, doomers believe Harris can regulate superintelligence. Effective accelerationists believe the technologists who build the models are more capable of mitigating any harms than statists who do not understand the models and hate technology." Here Balaji sets the technical experts of the AI field in opposition to the nation-states that would "regulate" them: the AI field, he holds, can govern itself without state intervention.

Balaji then tried to forge an alliance between AI and Web3. He claimed, "If the board can do this to Sam, they can do it to any OpenAI customer. AI needs to be decentralized." He added, "If it turns out that fanatical AI doomers are driving out a generational talent like GDB, every valuable AI organization will immediately remove doomers from positions of control." The OpenAI board is painted as "fanatical AI doomers," and he expects every AI company to stand as moral saints against them: "eliminate them before they destroy you."

In Balaji's view, OpenAI's refusal to open-source is an evil, and only open-source language models like Meta's Llama 2 are absolutely correct: "The only way out is acceleration + decentralization." Balaji pointed the way and deepened the opposition to "statism": "If you are in AI, turn to decentralization. Everyone in AI has learned the lesson: (1) sovereignty over your company; (2) decentralization of your technology; (3) independence from statist ideology. To effectively accelerate, we need to decentralize! Now everyone sees the risk of a single point of failure. If the board can do this to Sam, they can do it to any OpenAI customer. Therefore, AI needs to be decentralized."

As the author of The Network State, Balaji once again took the stage: "The logic of the AI doomers is the logic of a smart bomber. Anything that reduces p(doom) is good. For example, banning computation, bankrupting companies, or blowing up data centers. These radical Luddites (after the British handloom weavers who smashed machinery in the early 19th century) will not stop at destroying their own AI companies. They want to control your company too. Like many ideologies, AI doomerism has evolved into something resembling communism. They need to control AI worldwide, hence computing worldwide, hence the world, in order to 'save' it. Surprisingly, their master plan is to put Harris in charge of saving us from superintelligence."

Once the political slogans had been shouted, Balaji turned to another of OpenAI's labels, "non-profit": "Non-profits ruined San Francisco, and now they have ruined San Francisco's best company. They are truly non-profitable. Yes, there are principled exceptions, like the EFF, which does good work. But we need to rethink the proliferation of non-profits." Indeed, for the sake of their political narrative they even disowned the "idealistic" founding intent of OpenAI as a non-profit. Web3 no longer speaks of OpenAI's non-profit "idealism," no longer extols the ideal of "benefiting humanity"; it just tosses off one irresponsible line: "It has changed." Yes, it has changed, for better or for worse, one way or the other. The non-profit is not important, how much it changed is not important, how it changed is not important; what matters is that it changed. Therein lies the power of discourse.

More interesting still, Balaji's utilitarianism seems to have led him astray, and his goal is nothing but the "network state." In his telling, Twitter has become the tool for resisting the sovereign state, and he lavishes praise on Musk while forgetting that Musk is the biggest dictator of them all: "Decelerationists have a state, but we have a network."

"In the past year, the network has made significant progress in reclaiming social media, cryptocurrency, and AI from the hands of the state. First, Musk acquired Twitter, fired the Awakener, and removed Washington, D.C.'s core control over social media. Then, citizen journalists exposed FTX, destroying Washington, D.C.'s ideal control point over cryptocurrency. Now, effective altruism is also in turmoil, disrupting their critical cultural control point over AI. Therefore, we have the opportunity to fight for freedom of speech, decentralized currency, and open-source AI." Balaji said, "In everything, Twitter is crucial," "Even if the FBI requests Twitter for review, Musk does not review Twitter; the encrypted community on Twitter exposed FTX, even if Congress absolves SBF; effective accelerationism on Twitter condemns effective altruism, even if effective altruism tries to undermine AI through executive orders."

Balaji is right that the OpenAI incident could not have spread so widely without Twitter. He and the tech celebrity Xiangma both pointed, from the standpoint of how events come into being, to another catalyst of the opinion storm: Twitter itself, the arena in which public opinion takes place. Indeed, without a centralized, dictatorial Twitter, where would Balaji's "network" that seizes power from the state even come from?


Why is "effective altruism" so despised by these Web3 big shots? This naturally stems from SBF and his FTX empire. SBF deeply believed in effective altruism during his college years, "making the most money, donating to more people." But the collapse of FTX had a huge impact on the entire cryptocurrency industry, "hate a person, hate his faith," and "effective altruism" has since become "infamous." In Brian Armstrong's view,

"If this is really OpenAI's effective altruism, decelerationism, AI security coup… all of OpenAI's outstanding employees should resign and join Sam/Greg's new company (if they have one). This time, skip the non-profit board in power, remove decelerationism/effective altruism, maintain founder control, avoid meaningless regulation, just focus on construction, acceleration. You are creating a beautiful new world, don't feel guilty, seize it for your own motives."

"This decelerationist thinking destroyed Google, and OpenAI used to be the antidote to Google (if true, it has now fallen under the same forces). Effective altruism has destroyed a lot of value in cryptocurrency, and now it has destroyed artificial intelligence. San Francisco is also part of the problem. Many problems can be traced back to Marxist thought. You must eliminate this risk from the company - ironically, this is the real security risk they should be concerned about."

No less self-contradictory than Balaji, Brian Armstrong has his own grand theory: the OpenAI board exists neither to make money nor to maximize shareholder value, and that, he suggests, may be the "mistake."

a16z general partner Sriram Krishnan holds the same view: "Effective altruism may be the villain many of us should be paying attention to here."

Web3's effective accelerationists use a highly antagonistic "discourse" to voice their political demands, but that discourse obscures the diversity and complexity of the people and events involved.

II. AI Has No Borders? OpenAI Has a "Nation"

One source of this discourse construction, of Web3's talk of decentralization and effective accelerationism, is the AI industry's own concern with AI safety. AI safety implicates modern state governance, technological capitalism, and biopolitics. Under modern state governance, any technological question must ultimately answer to the will of the state, so the effective accelerationists' discourse construction cannot be separated from the national agenda as a whole.

This article will not reach back too far. Since 2021, the Biden-Harris administration has been committed to controlling and managing AI risk, and OpenAI has played an important part in that agenda, helping to establish and implement the U.S. AI risk-management framework.

As early as June 2019, OpenAI responded to the U.S. Department of Commerce's National Institute of Standards and Technology (NIST) on how U.S. and international AI standards should be developed, noting that it paid special attention to "reliable, robust, and trustworthy" systems and hoped to improve the predictability, robustness, and reliability of AI systems. In OpenAI's view, AI safety includes "ensuring that systems do not cause physical harm to the humans they interact with; ensuring that systems verify key decisions with humans before acting on them; and ensuring that AI systems' recommendations respect human constraints and the larger environment in which the system operates." It advocated that the U.S. lead on international AI technical standards and "prepare regulatory infrastructure for large-scale, multi-purpose AI systems."

After the Biden administration took office, the U.S. National AI Research Resource Task Force was established in June 2021 to help create and implement a blueprint for a national AI research resource. NIST then held an AI Risk Management Framework (AI RMF) workshop, and the AI RMF was formally put on the agenda. In March 2022, NIST solicited comments on the draft AI RMF, and OpenAI commented on the initial draft, stating that it would respond to NIST's requirements and suggesting that NIST consider adding alignment with human values and intent to its guiding principles, to ensure that aligned AI systems reliably act as humans intend.

The debut of ChatGPT in November 2022 further accelerated the Biden administration's AI risk-management push. On January 25, 2023, NIST officially released the AI Risk Management Framework (AI RMF 1.0) to strengthen the trustworthiness of AI and help organizations think about AI and its risks in new ways. In March, NIST launched the Trustworthy and Responsible AI Resource Center to support implementation of the AI RMF.

In February, the Biden administration issued an executive order on advancing racial equity, which pledged to "promote equity in science and root out bias in the design and use of new technologies, such as artificial intelligence" and to protect the public from algorithmic discrimination. The Federal Trade Commission, the Consumer Financial Protection Bureau, the Equal Employment Opportunity Commission, and the Civil Rights Division of the Department of Justice then issued a joint statement pledging to oversee the development and use of automated systems, noting that datasets, opacity, design, and use can all lead automated systems to unlawful discrimination in violation of federal law.

On May 4, Harris met with the CEOs of four technology companies, Alphabet, Anthropic, Microsoft, and OpenAI, stressing responsible AI innovation and the need to reduce risks before seizing opportunities. On May 23, the Biden-Harris administration issued an announcement updating, for the first time since 2019, the national AI R&D plan: the "National Artificial Intelligence Research and Development Strategic Plan: 2023 Update." The roadmap promotes responsible AI research, development, and deployment, and again stresses managing risks before seizing AI's opportunities. Federal investment is to focus on promoting responsible American innovation, serving the public good, protecting people's rights and safety, and upholding democratic values in AI R&D. In the same announcement, the U.S. National Science Foundation committed $140 million to establish seven new National AI Research Institutes, bringing the national total to 25.

In June, Biden held a roundtable with AI experts on the field's prospects and risks.

In July, Harris convened consumer, labor, and civil-rights leaders to discuss the risks associated with AI.

Biden subsequently convened seven companies, Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, and announced that the Biden-Harris administration had secured voluntary commitments from them to help move toward safe, secure, and transparent development of AI technology. They committed to:

  • Ensuring products are safe before introducing them to the public;
  • Building systems that put security first;
  • Earning the public's trust.

On October 30, Biden signed the "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," signaling an intent to keep Silicon Valley from releasing AI products before they have been comprehensively tested.

Building on the October 30 executive order, Harris announced during her November 1 visit to the UK that the U.S. was establishing, through the Department of Commerce, the U.S. AI Safety Institute within NIST to implement the NIST AI RMF. The U.S. also released a first draft of policy guidance on the federal government's use of AI, alongside the "Blueprint for an AI Bill of Rights," the AI RMF, a political declaration on the responsible military use of AI and autonomy, initiatives by AI-related philanthropies, measures against AI fraud, and international standards for content authentication.

The Biden-Harris administration's AI agenda spans AI safety, bioweapon risk, national security, cybersecurity, privacy, bias, civil rights, algorithmic discrimination, criminal justice, education, workers' rights, and research. Yet Web3's effective accelerationists choose to flatten these complex, diverse global issues into a single story about the state, as if by repudiating the state, centralization, and authoritarianism, and by founding a "network state" on neutral ground, they could escape the noise of the secular world once and for all.

III. The Hidden Truth: OpenAI's "Doomers" and "Accelerationists"

Take off the filter and look at the real people. According to reports, more and more AI insiders are estimating the probability that AI ends in catastrophe for humanity, their personal "p(doom)" (a p(doom) of 0.1, say, means assigning a 10% chance to doom). So, as OpenAI's former CEO, is Sam Altman an effective accelerationist? In the discourse of the Web3 effective accelerationists, Sam Altman stands for "accelerationism," while the "board that benefits all humanity" can only be "effective altruists" and "decelerationists."

Mira Murati, the CTO of OpenAI and leader of the DALL-E and ChatGPT teams, openly stated in an interview with Time magazine in February of this year:

Even we were somewhat uneasy when ChatGPT appeared. I am curious about where it will start to be genuinely useful to people, rather than merely novel, a pure curiosity;

There are many ethical and philosophical issues about social impact that we need to consider. It is important that we bring in different voices, such as philosophers, social scientists, artists, and humanists.

[AI] can be abused; the next question is how to govern the use of this technology globally in a way that is consistent with human values?

Although AI has brought a lot of uncertainty, Mira Murati is generally optimistic.

What caused the greatest panic was the open letter, promoted by Musk and others in March 2023, calling for a pause of at least six months in the training of AI systems more powerful than GPT-4. Perhaps the pace of AI development had outrun even these tech elites' expectations.

Ilya Sutskever, OpenAI's chief scientist and the one actually doing the technical research, once said, "At times, malicious people can easily cause enormous harm with models made available to anyone willing to pay."

In "Ilya: The AI Scientist Shaping the World," the documentary The Guardian released in November, Ilya asks whether AI technology will be good for humanity. The film opens with his worries: AI will make cyberattacks more extreme, enable fully automated AI weapons, and entrench stable dictatorships. For him, therefore, "the beliefs and desires of the first AGIs matter enormously, and it matters enormously that they be programmed correctly. If not, the nature of evolution and natural selection will favor systems that put their own survival first," "not because they will harm humans, but because they will be too powerful." Ilya drew an analogy with how humans treat animals.

Humans do not ask animals for consent when they build a highway, and the relationship between humans and AGI will be similar. AGI is genuinely autonomous and acts on its own behalf. Many machine-learning experts do not realize that the speed of neural networks and AI will increase by orders of magnitude in the coming years, and they will not have enough time to ensure that the AGI they build deeply cares about humans.

Merely for thinking harder about humanity's future, Ilya became an AI doomer to be eliminated. Yet he never said AI would destroy humanity; he was only reflecting on the dangers that AI's power can bring. The "AI doomsday" image owes much to the Guardian documentary itself, which is suffused with an unmistakable air of doom.

Sam Altman, as the accelerationist of the story, holds a natural legitimacy in the Web3 effective accelerationists' discourse construction; as founder of the Web3 project Worldcoin, he is also the figure Web3 is most eager to claim. But does Sam Altman believe wholly in effective accelerationism? Not entirely. His thinking also contains decelerationist strands grounded in humanistic concern.

In February of this year, Sam Altman published "Planning for AGI and beyond." In it he acknowledges the "serious risk of misuse, drastic accidents, and societal disruption" that AGI will bring, and argues that AI needs continuous iteration to address present problems, letting humans adapt step by step through a "gradual transition":

"Currently, we believe that the best way to address the challenges of deploying artificial intelligence is through a fast learning and carefully iterative tight feedback loop. Society will face major questions about what AI systems can do, how to eliminate bias, and how to deal with job losses. The best decisions will depend on the path taken by the technology, and like any new field, most expert predictions so far have been wrong."

Sam Altman is also a loyal advocate of open source, democracy, and decentralization:

"We believe that more use of artificial intelligence in the world will bring benefits, and we hope to promote the development of artificial intelligence (by putting models into our API, making them open source, etc.). We believe that democratized access will also bring more and better research, decentralized power, more benefits, and more people contributing new ideas."

But this is not all of Sam Altman. He pursues technological progress and does not deny the importance of "safety":

"It is important that we believe that progress in the safety and capabilities of artificial intelligence often needs to be made simultaneously. Separating them is a mistaken opposition; they are related in many ways. Our best security work comes from working with the most powerful models. Nevertheless, it is important to increase the proportion of progress in safety and progress in capabilities."

He says OpenAI has structured its incentives to align with a good outcome. Specifically:

  • Assisting other organizations in advancing safety;
  • Capping returns to shareholders;
  • Setting aside obligations to shareholders when safety demands it;
  • Sponsoring the world's most comprehensive universal basic income experiment;
  • Accepting independent audits before releasing new systems;
  • Agreeing to limits on the rate of growth of the compute used to create new models;
  • Establishing public standards for when an AGI effort should stop training, when a model is safe to release, and when a model should be pulled from production use;
  • Giving major world governments insight into training runs above a certain scale.

In the article's conclusion, Sam Altman says that "the future of humanity should be determined by humanity," and that it is important to share information about progress with the public. AGI has the capacity to keep accelerating, and at that point "the world could become extremely different from how it is today, and the risks could be extraordinary." He believes that "a slower takeoff is easier to make safe, and coordination to slow down at critical junctures will likely be important (even in a world where we don't need to do this to solve technical alignment problems, slowing down may be important to give society enough time to adapt)."

Another example: in March of this year, Sam Altman said in an ABC News interview that AI will reshape society and that he is "a little bit scared" of its current development. He believes regulators and society should be involved with ChatGPT as much as possible, and says he stays in "regular contact" with government officials. "I'm particularly worried that these models could be used for large-scale disinformation" and for "offensive cyberattacks," he said. What he does not fear is AI models making their own decisions and plotting to rule the world without humans: "This is a tool that is very much in human control," and the real question is who controls it. Altman still hopes to keep developing ever more powerful systems, integrating them into daily life and the economy, to "be an amplifier of human will." In his view, GPT-4's reasoning remains limited, and the point of pressing on toward GPT-5 may be precisely to keep improving a reasoning engine under human control. Altman believes AI's technological progress will not diminish the meaning of society and humanity.

In May of this year, Sam Altman testified before a U.S. Senate Judiciary subcommittee, arguing that AI could solve humanity's greatest problems while warning that it is powerful enough to change society in unpredictable ways, so "regulatory intervention by governments will be critical to mitigate the risks." "My worst fears are that we, the technology industry, cause significant harm to the world," Altman said, and "if this technology goes wrong, it can go quite wrong."

In fact, reading the statements of Mira Murati, Ilya Sutskever, and Sam Altman together, we can see that Ilya is no AI doomer; he is simply thinking about humanity's future from both a technical and a humanistic standpoint. Sam Altman, for his part, accepts a measure of regulation, safety, deceleration, and altruism. Plainly, Balaji, Brian, and the others are constructing, for their own ends, a false narrative of OpenAI's infighting and a false Sam Altman that fit their ideology. AI is meant to make human life better, but effective accelerationists can only voice their "ends" and "means" through the political narrative of Satoshi Nakamoto.

Sam Altman, the protagonist of the story, also firmly believes that AI is not an omniscient, omnipotent god but a tool controlled by humans. However idealistic his Worldcoin may be, it is a decentralized project that still aims at Bitcoin's ideals and at securing universal basic income. Web3 effective accelerationism, by contrast, comes wrapped in elite arrogance, monopolizing capital (Coinbase, a16z), crypto narratives (decentralization), and discursive politics (denationalization). Perhaps they are the true dictators.

Conclusion

In the eyes of AI scientists, human beings are the end. But in the eyes of Web3 effective accelerationists, "acceleration" is the means and "decentralization" is the end.

Clearly, to create an object for their revolution, Web3 effective accelerationists need only proclaim "acceleration," "unlimited technological progress," and "decentralization." In the linear, progressive time they inhabit, technology is the subject and humans are the "other" that must adapt. A Lacanian discursive politics suggests that technology will eventually fuse with capital, and that the discursive power of technological progress will always rest with a small elite. "Effective accelerationism," the new Silicon Valley bible, does exactly this. What they are building is a discursive politics, not the monster politics Žižek speaks of, for monsters are inhuman, and this world still rests on a shared humanity, which makes it hard to destroy and remake the world as destructively as Hitler did. Where there is narrative, there is discursive politics.

In the eyes of effective accelerationists, anything that hinders acceleration is to be opposed; for Web3 effective accelerationists, therefore, anything that hinders decentralization is to be opposed. Idealistic, non-profit, altruistic: none of it matters. Whatever does not fit the narrative of "acceleration" and "decentralization" is thrown in the trash and scorned (acceleration here having perhaps become a political tool). By constructing an "effectively altruistic," "decelerating" AI, they aim to prove that Web3 is history's teleological endpoint. In Web3's eyes, AI practitioners may have created the disruptive GPT-4, but they remain carbon-based life dwelling in tradition, while Crypto and Web3, which dismantle centralized structures and nation-states, are the final answer. Yet beyond Bitcoin, DeFi, and blockchain, Web3 faces narrative exhaustion and can only seek a union with AI. Crypto has been creating enemies for itself from the very beginning; if it does not decentralize and keep constructing oppositions, it loses its own legitimacy. In this world, the "self" always needs an opposed "other" to prove its own existence.

Effective accelerationism boasts that it stands for construction and progress, while quietly casting "deceleration" as stagnation or even regression. But perhaps it is not so, for deceleration is still movement forward. For their own purposes they downplay one truth: the technological development of AI cannot be stopped, and AI scientists have never denied acceleration. History is accelerating, but humans need a measure of deceleration to think about how to coexist with machines on this earth. This is also the puzzle of a Web3 effective accelerationism committed to "acceleration + decentralization": acceleration itself erodes the meaning of the human, yet they demand decentralized governance to solve human problems. Acceleration pursues efficiency, but decentralization is inefficient. A comment from a Twitter user under one of Balaji's tweets is worth quoting here:

"Wanting to accelerate and decentralize at the same time, a strange contradiction."


Ultimately they steer the narrative back to the means, "decentralization," and never truly confront the fundamental question of "humanity." A handful of effective accelerationists go on making new history, and in this homogeneous, universal time the whole world accelerates with them. Those who cannot keep up will be eliminated, left with no near shore and no far shore: cruel, yet real.

Still, we must face the truth of the modern world bravely. As Balaji put it, AGI represents the return of God, a God omniscient and omnipotent, though for now still under human control. The notion reflects religion's concession to the modern world. "Creating gods" genuinely fashions science into a religion, with everyone a believer in AI. And AI scientists are afraid precisely because AGI may truly be the return of God.


Now that the dust has settled at OpenAI, it may be hard for the drama to flare up again. In a Twitter Space dissecting the OpenAI coup, Balaji said, "Ilya, Adam, and Emmett are all genius technologists, but their moral views differ from Sam's and Greg's. This is not a financial, technical, or strategic disagreement. It is a moral one." Here Balaji's rhetoric turned noticeably milder, his speech less narrative and more matter-of-fact. He claimed that although he does not know Ilya, he knows Adam D'Angelo: "He has always been a very upright person. By all indications, so is Emmett Shear. So now every AI company has to choose its faith: effective altruism or effective accelerationism?"

Web3 must also accept a truth: the AI scientists did not take up Cardano's invitation to build an open-source large model, nor the decentralized-AI narrative. They have not bought into Web3; whether they went to Microsoft or returned to OpenAI, they still embraced traditional AI technology companies.

References

  1. Silicon Valley 101, "Unveiling the Growth History of OpenAI: Idealism, Conflict, Choices, and Power Struggles," April 2023.
  2. https://effectiveacceleration.tech/
  3. Wen Li Liang Kai Hua, "The New Totem of Silicon Valley: Can 'Effective Accelerationism' (e/acc) Raise This Banner?," August 2023.
  4. Wu Guanjun, "Discursive Politics and Monster Politics: Rethinking Political Philosophy through Popular Culture," Exploration and Contention, no. 3 (2018).
  5. Alex Williams and Nick Srnicek, trans. Lan Jiang, "The Accelerationist Manifesto: Overcoming the Suppression of Technology by Capitalism," March 2018.
