Dialogue with Vitalik: The Innovative Integration of POS, L2, Ethereum, and AI

PANews

Original Title: [DappLearning] Interview with Vitalik Buterin in Chinese

Original Author: DappLearning

On April 7, 2025, Vitalik and Xiao Wei appeared at the Pop-X HK Research House event in Hong Kong, co-hosted by DappLearning, ETHDimsum, Panta Rhei, and UETH.

During a break in the event, Yan, the founder of the DappLearning community, interviewed Vitalik. The interview covered multiple topics including ETH POS, Layer 2, cryptography, and AI. The conversation was in Chinese, and Vitalik's Chinese was very fluent.

Here is the content of the interview (for easier reading and understanding, the original content has been slightly edited):

01 Views on POS Upgrade

Yan:

Hello, Vitalik, I am Yan from the DappLearning community. It is a great honor to interview you here.

I started learning about Ethereum in 2017. I remember that in 2018 and 2019, there was a heated discussion about POW and POS, and this topic may continue to be discussed.

Looking at it now, (ETH) POS has been running stably for over four years, with around a million Validators in the consensus network. However, at the same time, the exchange rate of ETH to BTC has been declining, which has both positive aspects and some challenges.

So, from this point in time, what do you think about Ethereum's POS upgrade?

Vitalik:

I think the prices of BTC and ETH have nothing to do with POW and POS.

There are many different voices in the BTC and ETH communities, and what these two communities are doing is completely different, as are their ways of thinking.

Regarding the price of ETH, I think there is a problem: ETH has many possible futures, and (one can imagine) that in these futures, there will be many successful applications on Ethereum, but these successful applications may not bring enough value to ETH.

This is a concern for many people in the community, but it is actually a normal issue. For example, Google has many products and does many interesting things. However, over 90% of their revenue is still related to their Search business.

The relationship between Ethereum's ecosystem applications and ETH (price) is similar. Some applications pay a lot of transaction fees and consume a lot of ETH, while there are many (applications) that may be relatively successful, but they do not correspondingly bring that much success to ETH.

So this is something we need to think about and continue to optimize. We need to support more applications that have long-term value for Ethereum holders and ETH.

Therefore, I think the future success of ETH may appear in these areas. I don't think it has much relevance to improvements in consensus algorithms.

02 Concerns about PBS Architecture and Centralization

Yan:

Yes, the prosperity of the ETH ecosystem is also an important reason that attracts us developers to build it.

Okay, what do you think about the PBS (Proposer-Builder Separation) architecture of ETH 2.0? It's a good direction: in the future, everyone can use a mobile phone as a light node to verify (ZK) proofs, and anyone can stake 1 ETH to become a Validator.

However, Builders may become more centralized, as they need to do anti-MEV and generate ZK Proofs. If Based rollups are adopted, then Builders may have even more responsibilities, such as acting as Sequencers.

In this case, will Builders become too centralized? Although Validators are already sufficiently decentralized, this is a chain. If one link in the chain has a problem, it will also affect the operation of the entire system. So, how do we solve the censorship resistance issue in this area?

Vitalik:

Yes, I think this is a very important philosophical question.

In the early days of Bitcoin and Ethereum, there was a subconscious assumption:

Building a block and validating a block are one operation.

Assuming you are building a block that contains 100 transactions, your own node needs to process the gas for those 100 transactions. When you finish building the block and broadcast it to the world, every node in the world also needs to do the same amount of work (consume the same gas). So if we set the gas limit so that every laptop or MacBook, or a server of a certain size, can build blocks, then we need similarly configured node servers to validate those blocks.

This was the previous technology. Now we have ZK, DAS, many new technologies, and Statelessness.

Before using these technologies, building a block and validating a block needed to be symmetrical, but now it can become asymmetrical. So the difficulty of building a block may become very high, but the difficulty of validating a block may become very low.

Using a stateless client as an example: if we use stateless technology and increase the gas limit tenfold, the computational requirements for building a block become enormous, and an ordinary computer may no longer be able to do it. At that point, we may need a particularly high-performance Mac Studio or an even more powerful server.

But the cost of validation will become lower, because validation requires no storage at all, relying only on bandwidth and CPU computing resources. If we add ZK technology, the CPU cost of validation can also be eliminated. If we add DAS, the cost of validation will be extremely low. If the cost of building a block becomes higher, the cost of validation will become very low.

So, is this better compared to the current situation?

This question is quite complex. I think about it this way: if there are some super nodes in the Ethereum network, that is, some nodes have higher computational power, we need them to perform high-performance computing.

How do we prevent them from acting maliciously? For example, there are several types of attacks:

First: Creating a 51% attack.

Second: Censorship attack. If they refuse to accept some users' transactions, how can we reduce this type of risk?

Third: Operations related to anti-MEV. How can we reduce these risks?

Regarding the 51% attack, since the validation process is done by Attesters, those Attester nodes need to validate DAS, ZK Proof, and stateless clients. The cost of this validation will be very low, so the threshold for becoming a consensus node will still be relatively low.

For example, suppose some Super Nodes build blocks, and 90% of those blocks are yours, 5% belong to one party, and 5% to another. Even if you completely refuse to include certain transactions, it is not particularly bad. Why? Because you cannot interfere with the overall consensus process.

So you cannot perform a 51% attack; the only thing you can do is to refuse certain users' transactions.

Users may just need to wait for ten or twenty blocks for another person to include their transaction in a block, and that’s the first point.
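The "wait ten or twenty blocks" intuition can be made concrete with a small sketch. Assuming, purely for illustration, that a censoring coalition builds 90% of blocks and honest builders the remaining 10% (the shares from the example above, not protocol parameters):

```python
# Sketch: probability a censored transaction gets included within k blocks,
# if honest builders independently produce a 10% share of blocks.
# (Illustrative numbers from the example above, not protocol parameters.)

def prob_included_within(k_blocks: int, honest_share: float = 0.10) -> float:
    """Probability that at least one of the next k blocks has an honest builder."""
    return 1.0 - (1.0 - honest_share) ** k_blocks

def expected_wait_blocks(honest_share: float = 0.10) -> float:
    """Expected number of blocks until the first honest builder (geometric)."""
    return 1.0 / honest_share

print(round(prob_included_within(10), 3))  # chance within 10 blocks
print(round(prob_included_within(20), 3))  # chance within 20 blocks
print(expected_wait_blocks())              # 10.0 blocks on average
```

So even a 90% censoring builder share only delays inclusion by around ten blocks in expectation; it cannot block a transaction outright.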

The second point is that we have the concept of FOCIL (Fork-Choice enforced Inclusion Lists). What does FOCIL do?

FOCIL separates the role of selecting transactions from the role of executing transactions. This way, the role of choosing which transactions go into the next block can be more decentralized, allowing smaller nodes to independently choose transactions for inclusion in the next block. Meanwhile, even if you are a large node, your power is actually very limited[1].

This method is more complex than before. Previously, we thought of each node as a personal laptop. But if you look at Bitcoin, it now also has a rather hybrid architecture, because Bitcoin mining is dominated by large mining data centers.

So in POS, it is done like this: some nodes require more computational power and resources. However, the rights of these nodes are limited, and other nodes can be very decentralized, ensuring the security and decentralization of the network. But this method is more complex, so this is also one of our challenges.

Yan:

Very good thinking. Centralization is not necessarily a bad thing, as long as we can limit malicious actions.

Vitalik:

Yes.

03 Issues Between Layer 1 and Layer 2, and Future Directions

Yan:

Thank you for resolving my long-standing confusion. Now for the second part of my questions. As a witness to Ethereum's journey, Layer 2 has actually been very successful, and the TPS problem has largely been solved; it is no longer like the ICO era, when the network was congested.

I personally think that Layer 2 is quite usable now. However, currently, the issue of liquidity fragmentation for Layer 2 has also led many people to propose various solutions. What do you think about the relationship between Layer 1 and Layer 2? Is the current Ethereum mainnet too laid-back, too decentralized, and does it impose no constraints on Layer 2? Should Layer 1 establish rules with Layer 2, or create some profit-sharing models, or adopt solutions like Based Rollup? Justin Drake recently proposed this solution at Bankless, and I also agree with it. What do you think? I am also curious when the corresponding solutions will go live if they already exist.

Vitalik:

I think there are several issues with our Layer 2 now.

First, their progress in security is not fast enough. So I have been pushing for Layer 2 to upgrade to Stage 1 and hope they can upgrade to Stage 2 this year. I have been urging them to do this and have been supporting L2BEAT to do more transparency work in this area.

Second, there is the issue of L2 interoperability. That is, cross-chain transactions and communication between two L2s. If two L2s are in the same ecosystem, interoperability needs to be simpler, faster, and cheaper than it is now.

Last year, we started this work, now called the Open Intents Framework, along with Chain-specific addresses, which is mostly UX-related work.

In fact, I believe that the cross-chain issue of L2 is probably 80% a UX problem.

Although the process of solving UX issues can be painful, as long as the direction is correct, we can simplify complex problems. This is also the direction we are working towards.

Some things need to go further. For example, the withdrawal time for Optimistic Rollup is one week. If you have a token on Optimism or Arbitrum, transferring that token to L1 or another L2 requires waiting a week.

You can have Market Makers wait the week (and pay them a certain fee). For smaller transactions, ordinary users can use approaches like the Open Intents Framework (e.g., the Across Protocol) to transfer from one L2 to another, which is feasible. However, for larger transactions, Market Makers still have limited liquidity, so the fees they require will be relatively high. Last week, I published an article[2] in which I support the 2-of-3 validation method, i.e., the OP + ZK + TEE approach.

Because if we implement the 2 of 3 method, it can simultaneously meet three requirements.

The first is being completely trustless, without needing a Security Council; TEE technology plays only a supporting role, so it does not need to be fully trusted.

Second, we can start using ZK technology, but ZK is still in its early stages, so we cannot fully rely on this technology yet.

Third, we can reduce the withdrawal time from one week to one hour.

You can imagine that if users utilize the Open Intents Framework, the liquidity cost for Market Makers would decrease by 168 times. Because the time they need to wait (to perform rebalancing operations) would drop from one week to one hour. In the long term, we plan to reduce the withdrawal time from one hour to 12 seconds (the current block time), and if we adopt SSF, it can be reduced to 4 seconds.
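The 168x figure follows directly from the number of hours in a week: a market maker's capital is locked for the withdrawal delay, so cutting the delay cuts the liquidity cost proportionally. A minimal sketch of this arithmetic (an illustrative cost model, not a protocol specification):

```python
# Sketch: liquidity cost scales with how long a market maker's capital
# is locked waiting to rebalance, so the cost ratio is the delay ratio.

HOURS_PER_WEEK = 7 * 24  # 168 hours

def liquidity_cost_ratio(old_delay_hours: float, new_delay_hours: float) -> float:
    """Factor by which locked-capital cost drops when the delay shrinks."""
    return old_delay_hours / new_delay_hours

print(liquidity_cost_ratio(HOURS_PER_WEEK, 1))  # one week -> one hour: 168.0
print(round(3600 / 12, 1))                      # one hour -> 12-second slots: 300.0
```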

Currently, we will also use techniques like zk-SNARK aggregation to process ZK proofs in parallel, reducing latency somewhat. Of course, if users rely on ZK directly, they won't need to go through Intents; but going through Intents makes the cost very low, which is part of interoperability.

Regarding the role of L1, in the early stages of the L2 Roadmap, many people thought we could completely replicate Bitcoin's Roadmap, with L1 having very few uses, only doing proofs (performing minimal work), while L2 could handle everything else.

However, we found that if L1 plays no role at all, it is dangerous for ETH.

As we discussed before, one of our biggest concerns is: the success of Ethereum applications cannot translate into the success of ETH.

If ETH is not successful, it will lead to our community lacking funds and being unable to support the next round of applications. Therefore, if L1 plays no role, the user experience and the entire architecture will be controlled by L2 and some applications. There will be no one to represent ETH. So if we can assign more roles to L1 in some applications, it would be better for ETH.

Next, we need to answer the question: What will L1 do? What will L2 do?

In February, I published an article[3] stating that in an L2-centric world, there are many important tasks that L1 needs to perform. For example, L2 needs to send proofs to L1; if an L2 encounters issues, users will need to cross-chain to another L2 via L1. Additionally, Key Store Wallets and Oracle Data can be placed on L1, among other mechanisms that rely on L1.

There are also some high-value applications, such as DeFi, which are actually more suitable for L1. One important reason why some DeFi applications are better suited for L1 is their time horizon; users need to wait a long time, such as one, two, or three years.

This is especially evident in prediction markets, where sometimes questions are asked about what will happen in 2028.

Here lies a problem: if governance in an L2 fails, theoretically, all users there can exit; they can move to L1 or another L2. However, if there is an application in this L2 whose assets are locked in long-term smart contracts, users will have no way to exit. Thus, many theoretically safe DeFi applications are not very safe in practice.

For these reasons, some applications should still be built on L1, so we are starting to pay more attention to L1's scalability.

We now have a roadmap to enhance L1's scalability with about four to five methods by 2026.

The first is Delayed Execution (separating block validation and execution), which means we can validate blocks in each slot and execute them in the next slot. This has the advantage of potentially increasing the maximum acceptable execution time from 200 milliseconds to 3 or 6 seconds, allowing for more processing time[4].

The second is the Block-Level Access List, which requires each block to specify which accounts' state and related storage slots need to be read. This is somewhat similar to Statelessness, but without witnesses, and it has the advantage of allowing us to process EVM execution and IO in parallel, which is a relatively simple way to implement parallel processing.

The third is Multidimensional Gas Pricing[5], which sets a separate maximum capacity per resource for a block; this is very important for security.

Another is historical data processing (EIP-4444), which does not require every node to permanently store all history. For example, each node could store only 1%, and we could use a p2p approach in which your node stores one part and another node stores another part. This way, the information is stored in a more decentralized way.

So if we can combine these four solutions, we believe we can potentially increase L1's gas limit by 10 times, allowing all our applications to start relying more on L1 and doing more on L1, which would benefit L1 and ETH.
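Of the methods above, multidimensional gas pricing is easy to sketch: instead of one scalar gas limit, a block must stay under a separate cap for each resource dimension. The resource names and limits below are hypothetical illustrations, not actual protocol parameters:

```python
# Sketch of multidimensional gas: a block is valid only if every resource
# dimension stays under its own cap. Names and caps here are hypothetical.

BLOCK_LIMITS = {
    "execution": 30_000_000,    # compute gas
    "calldata": 1_000_000,      # data gas
    "state_access": 5_000_000,  # IO gas
}

def block_fits(tx_costs: list) -> bool:
    """Check per-dimension totals against each cap independently."""
    totals = {dim: 0 for dim in BLOCK_LIMITS}
    for tx in tx_costs:
        for dim, cost in tx.items():
            totals[dim] += cost
    return all(totals[dim] <= cap for dim, cap in BLOCK_LIMITS.items())

# A block heavy in one dimension is rejected even if the others are nearly empty.
print(block_fits([{"execution": 10_000_000, "calldata": 200_000}]))   # True
print(block_fits([{"execution": 1_000_000, "calldata": 2_000_000}]))  # False
```

Separate caps mean no single resource (e.g., calldata) can consume the whole block, which bounds worst-case load per dimension and is why this matters for security.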

Yan:

Okay, the next question, are we likely to welcome the Pectra upgrade this month?

Vitalik:

Actually, we hope to do two things: approximately at the end of this month, we will conduct the Pectra upgrade, and then in Q3 or Q4, we will perform the Fusaka upgrade.

Yan:

Wow, that fast?

Vitalik:

Hopefully.

Yan:

The next question I have is also related to this. As someone who has witnessed Ethereum's growth, we know that to ensure security, there are about five or six clients (consensus clients and execution clients) being developed simultaneously, which involves a lot of coordination work, leading to longer development cycles.

This has its pros and cons; compared to other L1s, it may indeed be slower, but it is also safer.

However, what solutions are there to avoid waiting a year and a half for an upgrade? I have seen you propose some solutions; could you elaborate on them?

Vitalik:

Yes, one solution is that we can improve coordination efficiency. We are now starting to have more people who can move between different teams to ensure more efficient communication between teams.

If a client team has an issue, they can raise it and let the research team know. Actually, one advantage of Tomasz becoming one of our new Executive Directors is that he comes from a client team, and now he is also in the EF. He can facilitate this coordination; that's the first point.

The second point is that we can be stricter with client teams. Our current method is that if there are five teams, we need all five teams to be fully prepared before we announce the next hard fork (network upgrade). We are now considering that we can start the upgrade as long as four teams are ready, so we don't have to wait for the slowest one, which can also motivate everyone more.

04 Views on Cryptography and AI

Yan:

So appropriate competition is necessary. It's great; I really look forward to every upgrade, but let's not keep everyone waiting too long.

Next, I want to ask about cryptography-related questions, which are somewhat scattered.

In 2021, when our community was just established, we gathered developers from major exchanges and researchers from venture capital to discuss DeFi. 2021 was indeed a phase where everyone participated in understanding, learning, and designing DeFi, a nationwide craze.

Looking back at the development of ZK: whether for the public or for developers, learning ZK systems such as Groth16, Plonk, and Halo2 has become increasingly difficult for newcomers to catch up with, as the technology advances rapidly.

Additionally, we now see a direction where ZKVM is developing quickly, leading to the ZKEVM direction not being as popular as before. When ZKVM matures, developers may not need to focus too much on the ZK underlying technology.

What are your suggestions and views on this?

Vitalik:

I think for some ecosystems of ZK, the best direction is that most ZK developers can know some high-level languages, that is, HLL (High-Level Language). They can write their application code in HLL, while the researchers of proof systems can continue to improve and optimize the underlying algorithms. Developers need to be layered; they do not need to know what happens in the next layer.

Currently, there may be a problem that the ecosystem of Circom and Groth16 is quite developed, but this poses a significant limitation for ZK ecosystem applications. This is because Groth16 has many drawbacks, such as each application needing its own Trusted Setup, and its efficiency is not very high. Therefore, we are also considering allocating more resources here to help some modern HLL achieve success.

Another promising route is ZK RISC-V, since RISC-V can serve as a common layer: many things, including the EVM and other applications, can be implemented on top of RISC-V[6].

Yan:

Okay, so developers only need to learn Rust, which is great. I attended Devcon in Bangkok last year and was also impressed by the development of applied cryptography.

Regarding applied cryptography, what are your thoughts on the combination of ZKP, MPC, and FHE, and what advice would you give to developers?

Vitalik:

Yes, this is very interesting. I think the prospects for FHE are good now, but there is a concern: MPC and FHE always require a Committee, meaning you need to select, say, seven or more nodes. If 51% or 33% of those nodes are compromised, your system has a problem. It is as if the system has a Security Council, and actually it is more serious than a Security Council: for a Stage 1 L2, 75% of the Security Council must be compromised before issues arise[7]. That's the first point.

The second point is that a reliable Security Council will mostly keep its keys in cold wallets, meaning its members are mostly offline. However, in most MPC and FHE schemes, the Committee must stay online continuously for the system to function, so its nodes may be deployed on a VPS or other servers, making them easier to attack.
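The threshold comparison can be made concrete: how many members must an attacker compromise in each scheme? The thresholds (33%, 51%, 75%) come from the discussion above; the committee sizes are illustrative:

```python
# Sketch: minimum number of compromised members needed to reach each
# threshold cited above. Committee sizes are illustrative examples.
import math

def min_compromised(n: int, threshold: float) -> int:
    """Smallest member count reaching the given fraction of an n-member committee."""
    return math.ceil(n * threshold)

print(min_compromised(7, 1 / 3))  # 3 of 7 for a 33% (BFT-style) committee
print(min_compromised(7, 0.51))   # 4 of 7 for a majority committee
print(min_compromised(8, 0.75))   # 6 of 8 for a 75% Security Council
```

Combined with the always-online requirement, a small MPC/FHE committee with a 33% or 51% threshold presents a noticeably softer target than a mostly offline 75% Security Council.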

This worries me a bit; I think many applications can still be developed, which have advantages but are not perfect.

Yan:

Finally, I want to ask a relatively light question. I see you have been paying attention to AI recently, and I want to list some viewpoints.

For example, Elon Musk said that humans might just be a guiding program for silicon-based civilizations.

Then there is a viewpoint in "The Network State" that centralized countries may prefer AI, while democratic countries prefer blockchain.

From our experience in the crypto space, decentralization presupposes that everyone will follow the rules, will check and balance each other, and will understand the risks, which ultimately leads to elite politics. So what do you think of these viewpoints? Just share your thoughts.

Vitalik:

Yes, I'm thinking about where to start answering.

Because the field of AI is very complex. For example, five years ago, no one would have predicted that the U.S. would have the best closed-source AI in the world, while China would have the best open-source AI. AI can enhance everyone's capabilities, and sometimes it can also increase the power of some centralized (countries).

However, AI can also have a somewhat democratizing effect. When I use AI myself, I find that in areas where I am already among the top thousand in the world, such as in some ZK development fields, AI actually helps me very little in the ZK part; I still need to write most of the code myself. But in areas where I am a novice, AI can help me a lot. For example, in developing Android apps, I had never done it before. I created an app ten years ago using a framework and wrote it in JavaScript, then converted it into an app; apart from that, I had never written a native Android app.

Earlier this year, I conducted an experiment where I wanted to try writing an app using GPT, and it was completed within an hour. This shows that the gap between experts and novices has been significantly reduced with the help of AI, and AI can also provide many new opportunities.

Yan:

To add a point, I really appreciate the new perspective you provided. I previously thought that with AI, experienced programmers would learn faster, while it would be unfriendly to novice programmers. However, in some ways, it indeed enhances the capabilities of novices. It may be a form of equality rather than division, right?

Vitalik:

Yes, but now a very important question that also needs to be considered is what effects the combination of some technologies we are working on, including blockchain, AI, cryptography, and other technologies, will have on society.

Yan:

So you still hope that humanity will not just be under elite rule, right? You also hope to achieve a Pareto optimality for the entire society, where ordinary people become super individuals through the empowerment of AI and blockchain.

Vitalik:

Yes, yes, super individuals, super communities, super humans.

05 Expectations for the Ethereum Ecosystem and Advice for Developers

Yan:

Okay, then we move on to the last question. What are your expectations and messages for the developer community? What would you like to say to the developers in the Ethereum community?

Vitalik:

To these Ethereum application developers, it's time to think.

There are many opportunities to develop applications in Ethereum now, and many things that were previously impossible can now be done.

There are many reasons for this, such as:

First: The previous L1 TPS was completely insufficient, but that problem no longer exists;

Second: The privacy issues that could not be solved before can now be addressed;

Third: Because of AI, the difficulty of developing anything has decreased. It can be said that although the complexity of the Ethereum ecosystem has increased somewhat, AI still allows everyone to better understand Ethereum.

So I believe that many things that failed in the past, including ten years ago or five years ago, may now succeed.

In the current blockchain application ecosystem, I think the biggest problem is that we have two types of applications.

The first type can be described as very open, decentralized, secure, and particularly idealistic (applications). However, they only have 42 users. The second type can be described as casinos. The problem is that these two extremes are both unhealthy.

So what we hope to do is create applications that meet two conditions.

First, users genuinely enjoy using them and they have real value; such applications are better for the world.

Second, they have real business models, that is, they are economically sustainable and do not need to rely on the limited funds of the Foundation or other organizations, which is also a challenge.

But now I think everyone has more resources than before, so if you can find a good idea and execute it well, your chances of success are very high.

Yan:

Looking back, I think Ethereum has been quite successful, continuously leading the industry while striving to solve the problems faced by the industry under the premise of decentralization.

Another point that resonates with me is that our community has always been non-profit, through Gitcoin Grants in the Ethereum ecosystem, as well as OP's retroactive rewards and airdrop rewards from other projects. We have found that building in the Ethereum community can receive a lot of support. We are also thinking about how to ensure the community can operate sustainably and stably.

Building on Ethereum is truly exciting, and we hope to see the true realization of the world computer soon. Thank you for your valuable time.

The interview took place at Mo Sing Leng, Hong Kong

April 7, 2025

Finally, a photo with Vitalik.


The references mentioned by Vitalik in the article are summarized as follows:

[1]: https://ethresear.ch/t/fork-choice-enforced-inclusion-lists-focil-a-simple-committee-based-inclusion-list-proposal/19870

[2]: https://ethereum-magicians.org/t/a-simple-l2-security-and-finalization-roadmap/23309

[3]: https://vitalik.eth.limo/general/2025/02/14/l1scaling.html

[4]: https://ethresear.ch/t/delayed-execution-and-skipped-transactions/21677

[5]: https://vitalik.eth.limo/general/2024/05/09/multidim.html

[6]: https://ethereum-magicians.org/t/long-term-l1-execution-layer-proposal-replace-the-evm-with-risc-v/23617

[7]: https://specs.optimism.io/protocol/stage-1.html?highlight=75#stage-1-rollup
