Original Title: "d/acc: one year later"
Author: Vitalik Buterin, Co-founder of Ethereum
Translation by: Leek, Foresight News
This article revisits the concept of decentralized acceleration (d/acc): its application to technological development, the challenges it faces around AI safety and regulation, its connection to cryptocurrency, and public goods funding. The author elaborates on what d/acc means, compares strategies for addressing AI risk, and discusses the value of cryptocurrency and explorations of public goods funding mechanisms. It closes by looking to the future, acknowledging the challenges of technological development while arguing that humanity still has the opportunity to build a better world with the tools and ideas it already has.
Introduction
Special thanks to Liraz Siri, Janine Leger, and the Balvi volunteers for their feedback and review.
About a year ago, I wrote an article on techno-optimism, expressing my overall enthusiasm for technology and the immense benefits it can bring, while also conveying caution on certain specific issues, chiefly superintelligent AI: the catastrophic risks that could arise if the technology is built the wrong way, and the risk of humanity irreversibly losing its power.
One of the core points of that article was a principle: decentralized, democratic, and differentiated defensive acceleration. We should accelerate technological development while deliberately focusing on technologies that enhance our capacity to defend rather than to harm, and we should work to decentralize power rather than let it concentrate in the hands of a few elites who would decide right and wrong on everyone's behalf. The model should be the democratic Switzerland and the historically quasi-anarchist Zomia region, not lords and castles under medieval feudalism.
In the year since then, these ideas and concepts have undergone significant development and maturation. I shared these thoughts on the "80,000 Hours" platform (note: an organization focused on career choices) and received numerous responses, most of which were positive, though there were certainly some critical voices.
The work itself has continued to advance and achieve tangible results: we have witnessed progress in verifiable open-source vaccines; the understanding of the value of healthy indoor air has deepened; "community notes" continue to play a positive role; prediction markets have had a breakthrough year as an information tool; zero-knowledge succinct non-interactive arguments of knowledge (ZK-SNARKs) have been applied in government identity systems and social media, and in securing Ethereum wallets through account abstraction; open-source imaging tools have found applications in medicine and brain-computer interfaces (BCI); and so on.
Last autumn, we held the first significant d/acc event: the "d/acc Discovery Day" (d/aDDy) at Devcon, which brought together speakers from various pillar fields of d/acc (biology, physics, cybersecurity, information defense, and neurotechnology) for a full day. People who have dedicated years to these technologies have increasingly understood each other's work, while outsiders have become more aware of this broader vision: the same values that drive the development of Ethereum and cryptocurrencies can extend to a wider world.
The Connotation and Extension of d/acc
Fast forward to 2042. You see a media report that a new epidemic may break out in your city. You have become accustomed to such news: people overreact to every animal-disease mutation, and in most cases nothing comes of it. The previous two potential epidemics were detected early through wastewater monitoring and open-source analysis of social media, and were contained at the nascent stage. This time, however, prediction markets put a 60% probability on at least 10,000 cases, which worries you.
Just yesterday, the genetic sequence of the virus was identified. An update for the software of the air testing device in your pocket was released, enabling it to detect the new virus (whether through a single breath test or after being exposed to indoor air for 15 minutes). Meanwhile, open-source instructions and code for generating vaccines using equipment available at any modern medical facility are expected to be released within weeks. Most people have not yet taken any action; they primarily rely on widely adopted air filtration and ventilation measures to protect themselves.
Due to your own immune issues, you act more cautiously: the open-source, locally running personal assistant AI you use, in addition to handling routine tasks like navigation and restaurant and event recommendations, also takes into account real-time air testing data and carbon dioxide data, recommending only the safest places. This data is provided by thousands of participants and devices, and with the help of ZK-SNARKs and differential privacy, the risk of the data being leaked or misused for other purposes is minimized (if you intend to contribute to these datasets, other personal assistant AIs will verify that these cryptographic tools indeed work).
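To make the privacy layer concrete, here is a minimal sketch of one ingredient of such a pipeline: the Laplace mechanism from differential privacy, applied to hypothetical aggregated CO2 readings. The function name, parameter values, and data are illustrative assumptions, not part of any real deployment; a production system would combine something like this with ZK-SNARKs proving the readings come from genuine devices.

```python
import numpy as np

def private_mean(readings, epsilon=0.5, lo=400.0, hi=5000.0):
    """Differentially private mean of CO2 readings (ppm), via the
    Laplace mechanism. Clipping each reading to [lo, hi] bounds any
    one contributor's influence on the mean to (hi - lo) / n, which
    is the sensitivity used to scale the noise."""
    x = np.clip(np.asarray(readings, dtype=float), lo, hi)
    sensitivity = (hi - lo) / len(x)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(x.mean() + noise)

# Illustration: 1,000 simulated sensor readings centered near 800 ppm.
readings = np.random.normal(800, 120, size=1000)
print(round(private_mean(readings, epsilon=0.5), 1))  # close to 800, plus noise
```

With many contributors, the noise needed to hide any individual device becomes tiny relative to the aggregate, which is why crowdsourced air data can be both useful and private.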
Two months later, the epidemic fizzles out: around 60% of people followed the basic preventive protocol, wearing masks when the air tester alarmed and indicated the virus's presence, and isolating at home when they tested positive. This was enough to push the transmission rate, already greatly lowered by passive heavy-duty air filtration, below 1. A disease that simulations suggested could be five times more severe than the COVID-19 pandemic two decades earlier caused no serious impact.
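To see why that is enough, consider the arithmetic of layered defenses: each independent measure multiplies the effective transmission rate down. The numbers in this sketch are purely illustrative assumptions chosen to match the story, not epidemiological estimates.

```python
R0 = 5.0                      # hypothetical raw reproduction number
filtration = 0.60             # passive air filtration cuts transmission by 60%
mask_effect, adherence = 0.50, 0.60   # masks cut transmission 50%; 60% comply
isolate_effect = 0.70         # positive cases isolating cut their spread by 70%

R_eff = (R0
         * (1 - filtration)
         * (1 - mask_effect * adherence)
         * (1 - isolate_effect * adherence))
print(round(R_eff, 2))  # ~0.81: below 1, so the outbreak decays
```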
d/acc Day at Devcon
The d/acc event at Devcon achieved a very positive outcome: the d/acc concept successfully brought together people from different fields and genuinely sparked their interest in each other's work.
Holding an event with "diversity" is not difficult, but enabling people from different backgrounds and interests to truly establish close connections is quite challenging. I still vividly remember my experiences in middle and high school, being forced to watch lengthy operas, which I personally found tedious. I knew I "should" appreciate them, as failing to do so would label me as an uncultured computer science slacker, but I could not resonate with the content of the operas on a deeper level. However, the atmosphere of d/acc Day was entirely different: it felt like people genuinely loved learning about various works from different fields.
If we aspire to build a future that is brighter than domination, deceleration, and destruction, we must engage in this kind of broad alliance building. d/acc seems to have achieved significant success in this regard, and this alone is enough to highlight the precious value of this concept.
The core idea of d/acc is simple and clear: decentralized, democratic, and differentiated defensive acceleration. It aims to build technologies that tilt the balance of offense and defense towards defense, without relying on transferring more power to central authorities during implementation. There is an inherent close connection between these two aspects: any decentralized, democratic, or free political structure tends to thrive when defense is easy to implement, while it faces severe challenges when defense is fraught with difficulties—in those cases, the more likely outcome is a chaotic period of mutual opposition among all, ultimately leading to a balance dominated by the strongest.
One way to understand the significance of attempting to achieve decentralization, defensiveness, and acceleration simultaneously is to compare it with the ideas generated by abandoning any one of these three aspects.
Last year's chart from "My Technological Optimism"
Decentralized acceleration, but neglecting the "differentiated defense" aspect
Essentially, this is being an effective accelerationist (e/acc) while also pursuing decentralization. Many people take this approach; some describe themselves as d/acc but, helpfully, characterize their focus as "offense." Many others express a more moderate enthusiasm for "decentralized AI" and similar topics, but in my view pay clearly insufficient attention to the "defense" side.
In my opinion, this approach may mitigate the risk of a specific group imposing a dictatorship over humanity, but it fails to address potential structural issues: in an environment conducive to offense, there is always a persistent risk of disaster, or someone may position themselves as a protector and permanently occupy a dominant position. Regarding AI, it also cannot adequately address the risk of humanity's overall power being diminished relative to AI.
Differentiated defensive acceleration, but ignoring "decentralization and democracy"
Accepting centralized control to achieve security goals has always held a certain appeal, and readers are undoubtedly familiar with many such examples and the drawbacks they bring. Recently, some have expressed concerns that extreme centralized control may be the only way to respond to future extreme technologies: for instance, a hypothetical scenario where "everyone wears a 'freedom tag'—a successor to today's more limited wearable monitoring devices, similar to the ankle tags used as alternatives to imprisonment in several countries… encrypted video and audio are continuously uploaded and interpreted in real time by machines." However, centralized control is a matter of degree. A milder, often-overlooked, yet still harmful form of centralized control is resistance to public oversight in the biotechnology field (e.g., food, vaccines), together with the closed-source norms that allow such resistance to go unchallenged.
The risks of this approach are evident: the center itself often becomes the source of risk. We saw this during the COVID-19 pandemic: gain-of-function research funded by several major world governments may have been the source of the pandemic, centralized epistemology led the World Health Organization to refuse for years to acknowledge that the virus was airborne, and mandatory social distancing and vaccine mandates triggered political backlash that may last for decades. Similar situations are likely to recur in any risk scenario related to AI or other risky technologies. A decentralized approach would better address risks arising from the center itself.
Decentralized defense, but excluding acceleration
Essentially, this is an attempt to slow technological progress or promote economic decline.
This strategy faces dual challenges. First, technological and economic growth is on the whole extremely beneficial to humanity, and any delay incurs immeasurable costs. Second, in a non-totalitarian world, stagnation is unstable: those who "cheat" the most and find plausible ways to keep advancing will gain the advantage. Deceleration strategies can work to some extent in specific contexts: European food being arguably healthier than American food is one illustration; the success of nuclear non-proliferation so far is another. But they cannot work forever.
Through d/acc, we are committed to achieving the following goals:
- Standing on principle in today's increasingly tribal world, rather than blindly building whatever can be built—instead, building specific things that make the world safer and better.
- Recognizing that exponential technological advancement means the world will become extremely peculiar, and humanity's overall "footprint" in the universe will inevitably continue to grow. Our ability to protect vulnerable animals, plants, and populations from harm must continuously improve, and the only way forward is to move boldly ahead.
- Building technologies that can genuinely protect us, rather than relying on the assumption that "good people (or good AIs) will control everything." We achieve this by creating tools that are naturally more effective for building and protecting than for destruction.
Another perspective on thinking about d/acc is to return to a framework from the late 2000s European Pirate Party movement: empowerment.
Our goal is to create a world that retains human agency, achieving negative freedom, which avoids others (whether ordinary citizens, governments, or superintelligent robots) from actively intervening in our ability to shape our own destinies, while also achieving positive freedom, ensuring we have the knowledge and resources to exercise that ability. This echoes a classical liberal tradition that has persisted for centuries, encompassing Stewart Brand's focus on "tool access" and John Stuart Mill's emphasis on education and freedom as key elements of human progress—perhaps supplemented by Buckminster Fuller's vision that the global problem-solving process should be participatory and widely distributed. Given the technological landscape of the 21st century, we can view d/acc as a means to achieve these same goals.
Third Dimension: Synergistic Development of Survival and Prosperity
In my article last year, d/acc particularly focused on defensive technologies: physical defense, biological defense, cybersecurity, and information defense. However, mere decentralized defense is insufficient to build a great world: we also need a forward-looking positive vision that clearly outlines what humanity can achieve after gaining new decentralization and security.
Last year's article indeed contained a positive vision in two aspects:
- When addressing the challenges of superintelligence, I proposed a pathway (which is not original to me) for how we can achieve superintelligence without losing power:
- Currently, building AI as a tool rather than a highly autonomous agent.
- In the future, using tools like virtual reality, electromyography, and brain-computer interfaces to establish closer feedback mechanisms between AI and humans.
- Over time, gradually moving towards the ultimate outcome, where superintelligence is a product of the close integration of machines and humans.
- When discussing information defense, I also mentioned that, in addition to defensive social technologies aimed at helping communities maintain cohesion and engage in high-quality discussions in the face of attackers, there are progressive social technologies that can assist communities in making high-quality judgments more easily: Pol.is is one example, as are prediction markets.
However, at the time, these two points felt disconnected from the core argument of d/acc: "Here are some ideas about building a more democratic and defense-friendly world at a foundational level, and by the way, here are some unrelated thoughts on how we achieve superintelligence."
Nevertheless, I believe that in reality, there are some crucial connections between the d/acc technologies labeled as "defensive" and "progressive." Let us expand the d/acc chart from last year's article by adding this axis (while renaming it "Survival and Prosperity") to see what results it presents:
There is a consistent pattern across various fields, where the sciences, ideas, and tools that help us "survive" in a certain domain are closely related to those that enable us to "prosper." Here are some specific examples:
- Many recent studies on COVID-19 focus on the persistent presence of the virus in the body, which is seen as a key mechanism for long COVID. Recently, there have also been indications that the persistent presence of the virus may be a pathogenic factor for Alzheimer's disease—if this view holds, then addressing the issue of viral persistence across all types of tissues may become key to tackling the problem of aging.
- Low-cost and miniaturized imaging tools, such as those being developed by Openwater, have tremendous potential for treating microthrombi, viral persistence, and cancer, and can also be applied to brain-computer interfaces.
- The ideas driving the development of social tools suitable for highly adversarial environments (like community notes) and those for reasonably cooperative environments (like Pol.is) are very similar.
- Prediction markets hold significant value in both high-cooperation and high-adversarial environments.
- Zero-knowledge proofs and similar technologies compute data while protecting privacy, increasing the amount of data available for beneficial work like scientific research, while also enhancing privacy protection.
- Solar energy and batteries are crucial for driving the next wave of clean economic growth, while also excelling in decentralization and physical resilience.
Moreover, there are important interdependencies between different disciplines:
- Brain-computer interfaces are vital as both information defense and collaboration technologies, because they enable more nuanced communication of our thoughts and intentions. They are not just machine-to-mind connections: they can also mediate mind-machine-mind interaction, which resonates with the value pluralist thinkers place on them.
- Many biotechnologies rely on information sharing, and in many cases, people are only willing to share information when they are confident it will only be used for specific applications. This relies on privacy technologies (like zero-knowledge proofs, fully homomorphic encryption, obfuscation techniques, etc.).
- Collaborative technologies can be used to coordinate funding for any other technological domain.
The Challenge: AI Safety, Urgent Timelines, and Regulatory Dilemmas
Different people have vastly different timelines for AI. The chart is from Zuzalu in Montenegro, 2023.
The most compelling counterargument I received in response to my article last year came from the AI safety community. Their argument was: "Of course, if we had half a century to develop strong AI, we could focus on building all these beneficial things. But in reality, it seems we may only have three years to develop general AI, and then three more years to develop superintelligence. Therefore, if we do not want to plunge the world into destruction or otherwise fall into an irreversible predicament, we cannot merely accelerate the development of beneficial technologies; we must also slow down the development of harmful technologies, which means we need strong regulatory measures that may anger the powerful." In my article last year, aside from vaguely calling for not building risky forms of superintelligence, I did not propose any specific strategies for "slowing down the development of harmful technologies." So here, it is necessary to directly address this issue: if we find ourselves in the least ideal world, where AI risks are extremely high and the timeline may be as short as five years, what regulatory measures would I support?
Reasons for Caution Towards New Regulations
Last year, the main AI regulatory proposal was California's SB-1047 bill. SB-1047 requires developers of the most powerful models (those costing over $100 million to train, or over $10 million to fine-tune) to undertake a series of safety testing measures before release. It also holds AI model developers accountable if they fail to exercise sufficient caution. Many critics argued that the bill "poses a threat to open source"; I disagree, since the cost threshold means it only affects the most powerful models: even the Llama 3 model likely falls below it. Looking back, however, I believe the bill had a more serious problem: like most regulation, it overfit the current situation. The focus on training cost has already proven fragile in the face of new technology: the recent state-of-the-art DeepSeek v3 model cost only $6 million to train, and in new models like o1, cost is shifting from training to inference.
Actors Most Likely Responsible for AI Superintelligence Catastrophe Scenarios
In reality, the actors most likely responsible for AI superintelligence catastrophe scenarios are the military. As we have witnessed in the past half-century of biosafety (and earlier), the military is willing to take some terrifying actions, and they are prone to making mistakes. Today, the application of AI in military contexts is rapidly advancing (as seen in Ukraine and Gaza). Moreover, any safety regulatory measures passed by governments will, by default, exempt their own military and companies closely cooperating with the military.
Response Strategies
Nevertheless, these arguments do not provide us with a reason to be helpless. On the contrary, we can use them as guidance to attempt to formulate rules that raise the least concerns.
Strategy 1: Accountability
If someone's actions cause legally actionable harm in some way, they may be sued. This does not address the risk posed by the military and other "above the law" actors, but it is a very general approach that avoids overfitting, which is why libertarian-leaning economists typically support this practice.
The main accountability targets considered so far are as follows:
- Users: those who use AI.
- Deployers: intermediaries providing AI services to users.
- Developers: those who build AI.
Attributing responsibility to users seems to align best with incentives. While the link between how a model is developed and how it ends up being used is often unclear, users determine exactly how the AI is used. Holding users accountable creates strong pressure to use AI in what I consider the right way: focusing on building mech suits for human minds rather than creating new self-sustaining forms of intelligent life. The former responds regularly to user intent, so it does not lead to catastrophic actions unless the user wants them. The latter carries the greatest risk of going out of control and triggering the classic "AI runaway" scenario. Another benefit of placing liability as close to the end use as possible is that it minimizes the risk that liability pushes people toward actions that are harmful in other ways (e.g., closing source, know-your-customer (KYC) requirements and surveillance, state/corporate collusion to covertly restrict users, as when banks refuse to serve certain clients, or excluding large regions of the world).
A classic counterargument to solely attributing responsibility to users is that users may be ordinary individuals with little money, or they may even be anonymous, making it impossible for anyone to actually pay for catastrophic damages. This viewpoint may be exaggerated: even if some users are too small to bear responsibility, the ordinary customers of AI developers are not, so AI developers will still be incentivized to build products that assure users they will not face high liability risks. That said, this is still a valid point that needs to be addressed. You need to incentivize someone with resources in the pipeline to take appropriate precautions, and deployers and developers are easily identifiable targets who still have significant influence over the safety of the models.
Attributing responsibility to deployers seems reasonable. A common concern is that it does not work for open-source models, but this seems manageable, especially since the most powerful models are likely to be closed-source (if the outcome is open-source, then while deployer responsibility may ultimately not be very useful, it will not cause too much harm). Developer responsibility also faces the same concerns (although there are certain barriers to fine-tuning models to perform actions that are otherwise not permitted for open-source models), but the same rebuttal applies. As a general principle, imposing a "tax" on control essentially says, "You can build things you cannot control, or you can build things you can control, but if you build things you can control, then 20% of the control must be used for our purposes," which seems to be a reasonable stance for the legal system.
One idea that seems not to have been fully explored is to attribute responsibility to other actors in the pipeline who are more likely to have sufficient resources. A very d/acc-aligned idea is to hold accountable the owners or operators of any devices that AI takes over (e.g., through hacking) during the execution of certain catastrophic harmful actions. This would create a very broad incentive for people to strive to make the infrastructure of the world (especially in computing and biology) as safe as possible.
Strategy 2: A Global "Soft Pause" Button on Industrial-Scale Hardware
If I were convinced we need something more "powerful" than liability rules, this is the strategy I would choose. The goal is to have the capability to reduce worldwide available computing power by roughly 90-99% for 1-2 years at a critical period, to buy humanity more time to prepare. The value of 1-2 years should not be understated: a year of "wartime mode" can easily be worth a hundred years of work under conditions of complacency. Ways to implement a "pause" are already being explored, including concrete proposals such as requiring hardware registration and verifying location.
A more advanced approach is to use clever cryptographic techniques: for example, industrial-scale (but not consumer-grade) AI hardware could be equipped with a trusted hardware chip that only allows it to continue operating if it receives 3-of-3 signatures from major international institutions (including at least one non-military-affiliated organization) each week. The signatures would be device-independent (if necessary, we could even require zero-knowledge proofs that they were published on a blockchain), so it would be all-or-nothing: there would be no practical way to authorize one device to keep operating without authorizing all other devices.
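Here is a minimal sketch of what such a weekly check could look like, assuming Ed25519 keys and using the Python `cryptography` library. Everything here (the key handling, the message format, the `chip_allows_operation` helper) is a hypothetical illustration of the all-or-nothing property, not a proposal for actual chip firmware.

```python
from datetime import date
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Demo stand-ins: in a real chip, the three institutional public keys
# would be burned in at manufacture; private keys never leave the signers.
signers = [Ed25519PrivateKey.generate() for _ in range(3)]
TRUSTED_KEYS = [k.public_key() for k in signers]

def epoch_message(day: date) -> bytes:
    # Every device verifies the same message for the current ISO week,
    # so a published signature authorizes all devices or none.
    iso = day.isocalendar()
    return f"allow-compute:{iso.year}-W{iso.week:02d}".encode()

def chip_allows_operation(day: date, signatures: list[bytes]) -> bool:
    """3-of-3 check: every institution must have signed this week's message."""
    msg = epoch_message(day)
    if len(signatures) != len(TRUSTED_KEYS):
        return False
    for key, sig in zip(TRUSTED_KEYS, signatures):
        try:
            key.verify(sig, msg)
        except InvalidSignature:
            return False
    return True

# Weekly flow: institutions publish signatures; each chip checks them.
today = date.today()
sigs = [k.sign(epoch_message(today)) for k in signers]
print(chip_allows_operation(today, sigs))                       # True: all three signed
print(chip_allows_operation(today, sigs[:2] + [b"\x00" * 64]))  # False: one invalid
```

Because every device checks the same per-week message against the same baked-in keys, there is no way to authorize some devices and not others, which is precisely the property the scheme relies on.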
This seems to "meet the requirements" in maximizing benefits and minimizing risks:
- This is a useful capability: if we receive indications that AI approaching superintelligence begins to do things that could lead to catastrophic harm, we would want to transition more slowly.
- Before such critical moments arrive, merely having the ability to soft-pause poses little harm to developers.
- Focusing on industrial-scale hardware and setting the target at 90-99% will avoid some dystopian practices, such as implanting spy chips or mandatory shutdown switches in consumer-grade laptops, or forcing small countries to take harsh measures against their will.
- Focusing on hardware seems to be highly adaptable to technological changes. We have already seen across multiple generations of AI that quality largely depends on available computing power, especially in the early versions of new paradigms. Therefore, reducing available computing power by 10-100x could easily create a decisive advantage in a rapid battle between out-of-control superintelligent AI and humans trying to stop it.
- The inherent inconvenience of needing to go online weekly to obtain signatures would strongly deter the idea of extending this scheme to consumer-grade hardware.
- Verification can be conducted through random checks, and operating at the hardware level would make it difficult to exempt specific users (the method of legal shutdown rather than technical means does not possess this all-or-nothing attribute, making it easier to slide into exemptions for the military, etc.).
Hardware regulation is already being strongly considered, although typically within the framework of export controls, which essentially embodies a "we trust our side but not the other side" mentality. Leopold Aschenbrenner famously argued that the U.S. should compete for decisive advantages and then force China to sign an agreement limiting the number of devices they can operate. In my view, this approach seems risky and may combine the flaws of multipolar competition and centralization. If we must limit people, it seems better to impose equal restrictions on everyone and strive for actual cooperation to organize implementation, rather than one side attempting to dominate all.
The Role of d/acc Technologies in AI Risks
Both strategies (liability and hardware pause button) have vulnerabilities, and it is clear that they are merely temporary stopgaps: if something can be done on a supercomputer at time T, it is likely to be possible on a laptop at time T + 5 years as well. Therefore, we need more stable measures to buy time. Many d/acc technologies are relevant here. We can view the role of d/acc technologies as follows: if AI were to take over the world, how would it do so?
- It hacks into our computers → cybersecurity
- It creates super plagues → biological defense
- It persuades us (to either trust it or distrust each other) → information defense
As briefly mentioned above, liability rules are a naturally d/acc-aligned regulatory approach because they can effectively incentivize the adoption of these defensive measures worldwide and take them seriously. Taiwan has recently been experimenting with holding parties accountable for false advertising, which can be seen as an example of using liability to encourage information defense. We should not be overly eager to impose liability everywhere and must remember the benefits of ordinary freedoms that enable small players to innovate without fear of litigation, but where we genuinely want to push for stronger safety, liability can be quite flexible and effective.
The Role of Cryptocurrency in d/acc
Many aspects of d/acc extend far beyond typical blockchain themes: biosafety, brain-computer interfaces, and collaborative discourse tools seem quite distant from what cryptocurrency enthusiasts usually discuss. However, I believe there are some important connections between cryptocurrency and d/acc, particularly:
- d/acc is an extension of the fundamental values of cryptocurrency (decentralization, anti-censorship, open global economy and society) to other technological domains.
- Because cryptocurrency users are natural early adopters and the values are aligned, the cryptocurrency community is a natural early user base for d/acc technologies. The strong emphasis on community (both online and in person, through events and pop-up villages), and the fact that these communities actually do high-stakes things together rather than just talk to each other, makes the cryptocurrency community an especially attractive incubator and testbed for d/acc technologies that fundamentally operate at the level of collectives rather than individuals (e.g., most information defense and biological defense technologies).
- Many cryptocurrency technologies can be applied to d/acc thematic areas: blockchain for building more robust and decentralized financial, governance, and social media infrastructures, zero-knowledge proofs for protecting privacy, etc. Today, many of the largest prediction markets are built on blockchain and are gradually becoming more complex, decentralized, and democratic.
- There are also win-win collaboration opportunities in adjacent technological areas that are very useful for cryptocurrency projects while being key to achieving d/acc goals: formal verification, computer software and hardware security, and robust governance technologies with resilience. These enhance the security and robustness of Ethereum blockchain, wallets, and decentralized autonomous organizations (DAOs), and they also achieve important civil defense goals, such as reducing our vulnerability to cyberattacks (including those potentially from superintelligent AI).
Cursive is an application that uses fully homomorphic encryption (FHE) to let users discover mutual interests with other users while preserving privacy. It was used at Edge City in Chiang Mai (one of the many offshoots of Zuzalu).
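Cursive itself uses FHE; as a rough illustration of the underlying matching idea, here is a Diffie-Hellman-style private set intersection sketch, in which neither side reveals non-matching interests in the clear. The group size, interest tags, and helper functions are toy assumptions for readability, far below real security parameters.

```python
import hashlib
import secrets

# Toy group: a Mersenne prime far smaller than real security parameters.
P = 2**127 - 1

def h2g(item: str) -> int:
    # Hash an interest tag into the multiplicative group mod P.
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

def blind(items: list[str], secret: int) -> list[int]:
    # First-pass blinding with one party's private exponent.
    return [pow(h2g(x), secret, P) for x in items]

def reblind(blinded: list[int], secret: int) -> list[int]:
    # Second-pass blinding with the other party's exponent.
    return [pow(y, secret, P) for y in blinded]

# Each user holds a private exponent. Since (H(x)^a)^b == (H(x)^b)^a,
# double-blinded tags match exactly when the underlying interests match.
a = secrets.randbelow(P - 3) + 2
b = secrets.randbelow(P - 3) + 2
alice = ["zk-proofs", "air-quality", "opera"]
bob = ["air-quality", "zk-proofs", "biotech"]

bob_double = set(reblind(blind(bob, b), a))   # Alice reblinds Bob's tags
alice_double = reblind(blind(alice, a), b)    # Bob reblinds Alice's tags
shared = [x for x, tag in zip(alice, alice_double) if tag in bob_double]
print(shared)  # ['zk-proofs', 'air-quality']
```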
d/acc and Public Goods Funding
One question I have been interested in is how to come up with better mechanisms to fund public goods: projects that are valuable to very large groups but lack naturally accessible business models. My past work in this area includes my contributions to quadratic funding and its applications in Gitcoin grants, retroactive public goods funding (retro PGF), and more recently, deep funding.
Many people are skeptical about the concept of public goods. This skepticism often arises from two sources:
- Public goods have historically been used as a justification for governments to impose harsh central planning and intervention on society and the economy.
- A common perception that public goods funding lacks rigor, runs on social desirability bias—what sounds good rather than what is actually good—and favors insiders who can play the social game.
These are important criticisms and reasonable ones. However, I believe that strong decentralized public goods funding is crucial for the d/acc vision, as a key goal of d/acc (minimizing central control points) inherently obstructs many traditional business models. Building successful enterprises on open-source foundations is possible—several Balvi grantees are doing so—but in some cases, it is difficult enough that important projects require additional ongoing support. Therefore, we must do the hard work of figuring out how to fund public goods in a way that addresses the two criticisms mentioned above.
The solution to the first problem is essentially credible neutrality and decentralization. Central planning is problematic because it hands control to elites who may abuse their power, and because it overfits the present situation, becoming increasingly ineffective over time. Quadratic funding and similar mechanisms are precisely about funding public goods in a way that is as credibly neutral and (architecturally and politically) decentralized as possible.
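As a concrete illustration of the credibly neutral math, here is a minimal quadratic funding sketch; the project names and amounts are made up, and real deployments such as Gitcoin layer on additional defenses (e.g., collusion resistance) omitted here.

```python
import math

def quadratic_match(project_contributions, matching_pool):
    """Standard quadratic funding: a project's raw match is
    (sum of sqrt(contributions))^2 minus the contributions themselves,
    with all matches scaled down proportionally to fit the pool."""
    raw = {
        name: sum(math.sqrt(c) for c in contribs) ** 2 - sum(contribs)
        for name, contribs in project_contributions.items()
    }
    total = sum(raw.values())
    scale = min(1.0, matching_pool / total) if total > 0 else 0.0
    return {name: round(r * scale, 2) for name, r in raw.items()}

# Many small donors beat one large donor giving the same total amount.
grants = {
    "air-filter-lib": [1.0] * 100,   # 100 donors, $1 each
    "whale-project": [100.0],        # 1 donor, $100
}
print(quadratic_match(grants, matching_pool=10_000))
# {'air-filter-lib': 9900.0, 'whale-project': 0.0}
```

The point of the formula is that breadth of support, not size of wallet, drives the allocation: 100 donors giving $1 each attract a far larger match than one donor giving $100.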
The second problem is more challenging. A common criticism of quadratic funding is that it quickly turns into a popularity contest, forcing project funders to spend heavily on public relations. Projects that are "in front of people" (such as end-user applications) get funded, while more behind-the-scenes projects (the proverbial "dependency maintained by someone in Nebraska") receive nothing at all. Retroactive funding on Optimism relies on a smaller number of expert badge holders; there, the popularity-contest effect is reduced, but the social effect of having close personal relationships with badge holders is amplified.
Deep funding is my latest effort to address this issue. Deep funding has two main innovations:
- Dependency graph. We do not ask each juror a global question ("What is the value of project A to humanity?"), but rather a local question ("Which is more valuable for outcome C, project A or project B? By how much?"). Humans are notoriously bad at answering global questions: in a famous study, when asked how much they would be willing to pay to save N birds, respondents gave roughly the same answer of $80 for N = 2000, N = 20000, and N = 200000. Local questions are easier to handle. We then combine local answers into a global answer by maintaining a "dependency graph": for each project, which other projects contribute to its success, and by how much?
- AI as distilled human judgment. Each juror is assigned only a small random sample of all the questions. There is an open competition where anyone can submit AI models that attempt to efficiently fill in all the edges in the graph. The final answer is a weighted sum of the models most compatible with the jury's answers (for code examples, see here; a simplified sketch of this weighting step also follows this list). This approach lets the mechanism scale to very large sizes while requiring jurors to submit only a small number of "information bits." That reduces opportunities for corruption and ensures each bit is high quality: jurors can take their time over each question rather than rapidly clicking through hundreds. Using an open competition for the AI reduces bias from any single training and management process. The open market of AIs is the engine; humans are the steering wheel.
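Here is a minimal sketch of that model-weighting step, with made-up edge names, model predictions, and jury answers; the production mechanism handles incentives, distillation, and far larger graphs.

```python
import numpy as np

# Hypothetical setup: each submitted model predicts a weight for every
# dependency edge; jurors answered only a small random sample of edges.
edges = ["A->C", "B->C", "D->C", "E->C"]
model_preds = {
    "model1": np.array([0.40, 0.30, 0.20, 0.10]),
    "model2": np.array([0.25, 0.25, 0.25, 0.25]),
    "model3": np.array([0.45, 0.35, 0.10, 0.10]),
}
jury_sample = {0: 0.42, 2: 0.15}  # edge index -> juror consensus weight

def spot_check_error(preds: np.ndarray, sample: dict[int, float]) -> float:
    # Mean squared disagreement with the jury on the sampled edges.
    return float(np.mean([(preds[i] - v) ** 2 for i, v in sample.items()]))

# Weight each model by inverse error on the jury's spot checks, then blend
# predictions: the open market of models is the engine, the jury's sparse
# answers are the steering wheel.
errors = {m: spot_check_error(p, jury_sample) for m, p in model_preds.items()}
weights = {m: 1.0 / (e + 1e-9) for m, e in errors.items()}
z = sum(weights.values())
blended = sum((w / z) * model_preds[m] for m, w in weights.items())
print(dict(zip(edges, np.round(blended, 3))))
```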
But deep funding is just the latest example; there have been other ideas for public goods funding mechanisms before, and there will be more in the future. allo.expert has done a great job cataloging them. The fundamental goal is to create a social tool capable of funding public goods with at least the accuracy, fairness, and open access levels that come close to market funding for private goods. It does not have to be perfect; after all, the market itself is far from perfect. But it should be effective enough that developers engaged in high-quality open-source projects that benefit everyone can continue to do so without feeling the need to make unacceptable compromises.
Today, most leading projects in d/acc thematic areas—vaccines, brain-computer interfaces, "edge brain-computer interfaces" like wrist EMG and eye tracking, anti-aging drugs, hardware, etc.—are proprietary projects. This has significant downsides in ensuring public trust, as we have already seen in many of the aforementioned areas. It also shifts attention to competitive dynamics ("Our team must win this critical industry!"), rather than ensuring that these technologies emerge quickly enough to protect us in a world dominated by superintelligent AI. For these reasons, strong public goods funding can be a powerful driver of openness and freedom. This is another way the cryptocurrency community can help d/acc: by seriously exploring these funding mechanisms and making them work well in their own context, preparing for broader applications in open-source science and technology.
The Future
The coming decades present significant challenges. Recently, I have been reflecting on two challenges:
- A powerful new wave of technology, especially strong AI, is rapidly approaching, accompanied by important traps we need to avoid. "Artificial superintelligence" could arrive in five years or it could take fifty years. In either case, it is unclear whether the default outcome will be automatically positive, as there are multiple traps to avoid, as described in this article and the previous one.
- The world is becoming increasingly uncooperative. Many powerful actors who previously seemed to act based on noble principles (universalism, freedom, shared humanity, etc.) are now more openly and actively pursuing their own personal or tribal interests.
However, each of these challenges holds a glimmer of hope. First, we now have very powerful tools to complete our remaining work more quickly:
- Current and near-future AI can be used to build other technologies and can serve as a governance element (as in deep funding or information finance). It is also highly relevant to brain-computer interfaces, which can themselves provide further productivity enhancements.
- Large-scale coordination is now possible on a larger scale than ever before. The internet and social media have expanded the scope of coordination, global finance (including cryptocurrency) has enhanced its power, and now information defense and collaboration tools can improve its quality; perhaps soon, brain-computer interfaces in a human-computer-human format can deepen it.
- Formal verification, sandbox technologies (web browsers, Docker, Qubes, GrapheneOS, etc.), secure hardware modules, and other technologies are improving, making better cybersecurity possible.
- Writing any type of software is much easier than it was two years ago.
- Recent foundational research on understanding how viruses work, particularly the simple understanding that the most important mode of transmission is airborne, has shown a clearer path for improving biological defenses.
- Recent advances in biotechnology (e.g., CRISPR, advancements in biological imaging) have made various biotechnologies—whether for defense, longevity, super happiness, exploring multiple new biological hypotheses, or simply doing very cool things—more accessible.
- The joint progress in computing and biotechnology is making synthetic biological tools possible, which you can use to adapt, monitor, and improve your health. Cyber defense technologies, such as cryptography, make this personalized dimension more feasible.
Secondly, many of the principles we cherish are no longer monopolized by specific parts of the old powers; they can be reclaimed by a broad coalition that welcomes anyone from around the world. This may be the greatest benefit of the recent political "realignments" around the world, and it is worth leveraging. Cryptocurrency has excelled at capitalizing on this and finding global appeal; d/acc can do the same.
Acquiring tools means we can adapt and improve our biological characteristics and environment, while the "defense" aspect of d/acc means we can do so without infringing on others' freedom to do the same. The principle of liberal pluralism means we can have great diversity in how we achieve this, while our commitment to shared human goals means it should be realized.
We humans remain the brightest stars. The task before us—to build a brighter 21st century while protecting human survival, freedom, and agency as we reach for the stars—is a challenging one. But I believe we are up to the task.