Key points from the 13th AMA of the Ethereum Foundation: EXECUTE precompile, native Rollup, Blob fee model, DA value capture, etc.

链捕手 (ChainCatcher)

11 months ago

Source: official Reddit forum post

Translation by: GaryMa (Wu Says Blockchain)

The Ethereum Foundation research team held its 13th AMA on Reddit on February 25, 2025: community members left questions in the post, and members of the research team responded. The topics covered included the EXECUTE precompile, native Rollups, the Blob fee model, DA value capture, the block construction endgame, reflections on the L2 strategy, the Verge, VDFs, encrypted mempools, and academic funding. Wu Says has summarized the relevant questions and technical points discussed in this AMA as follows:

Question 1: Native Rollup vs. EXECUTE Precompile

Question:

You may have seen Martin Köppelmann's talk, where he proposed the concept of "Native Rollups," similar to our early idea of "Execution Shards."

Additionally, Justin Drake also proposed a "Native Rollup" scheme, suggesting integrating some L2 functionalities into the consensus layer.

This is important to me because current L2s do not meet my expectations for Ethereum— for example, they have issues like admin backdoors. I also don't see how they can solve these problems in the future, as they will eventually become outdated if they cannot be upgraded. What is the current progress of these proposals? Has the community reached a consensus on these ideas, or is there a general belief that Rollups should remain organizationally independent from Ethereum? Are there any other related proposals?

Answer (Justin Drake — Ethereum Foundation):

To avoid confusion, I suggest referring to Martin's proposal directly as "Execution Sharding," a concept that has existed for nearly a decade. The main difference between Execution Sharding and Native Rollups is flexibility. Execution Sharding is a fixed-template single chain — a complete replica of the L1 EVM — typically instantiated top-down as a fixed number of shards via hard fork. Native Rollups, in contrast, are customizable chains with flexible sequencing, data availability, governance, bridging, and fee settings, instantiated bottom-up and permissionlessly through a programmable precompile. I believe Native Rollups align better with Ethereum's programmable spirit.

We need to give EVM-equivalent L2s a path to escape their security councils while retaining full L1 security and EVM equivalence across L1 hard forks. Execution Sharding, lacking flexibility, struggles to meet the needs of existing L2s. Native Rollups, by introducing something like an EXECUTE precompile (possibly with an auxiliary DERIVE precompile to support derivation), open up a new design space.

Regarding "community consensus":

Discussions about Native Rollups are still in the early stages. However, I find it not difficult to promote this concept to developers of EVM-equivalent Rollups. If a Rollup can choose to become "native," it is almost a free upgrade provided by L1; why not accept it? Notably, founders of top Rollups including Arbitrum, Base, Namechain, Optimism, Scroll, and Unichain have expressed interest at the 17th sequencing call and on other occasions.

In contrast, I find promoting Native Rollups at least 10 times easier than promoting Based Rollups. Based Rollups do not look like a free upgrade at first: they give up MEV revenue, and a 12-second block time may hurt user experience. In reality, with incentive-compatible sequencing and pre-confirmation mechanisms they can deliver a better experience — it just takes more time to explain and digest.

Technically, the EXECUTE precompile will have a gas limit and a dynamic fee mechanism similar to EIP-1559 to prevent DoS attacks. For optimistic L2s this is a non-issue, since EXECUTE is only called during fraud proofs. For pessimistic (ZK) Rollups, which prove every state transition, data availability (DA) is likely more of a bottleneck than execution: validators can verify SNARKs cheaply, while home network bandwidth is a hard limit.
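To make the fee mechanism concrete, here is a minimal sketch of an EIP-1559-style update applied to a cumulative EXECUTE gas market. This is my own illustration: no EXECUTE specification exists yet, so all names and constants below are hypothetical.

```python
# Hypothetical EIP-1559-style fee update for an EXECUTE gas market.
# All names and constants are illustrative; no EXECUTE spec exists yet.
TARGET = 1_000_000          # per-block target of cumulative EXECUTE gas
MAX_CHANGE_DENOM = 8        # caps the per-block fee change at +/-12.5%

def next_base_fee(base_fee: int, gas_used: int) -> int:
    """Raise the fee when EXECUTE usage is above target, lower it below."""
    delta = base_fee * (gas_used - TARGET) // (TARGET * MAX_CHANGE_DENOM)
    return max(base_fee + delta, 1)

# A sustained DoS attempt (blocks full of EXECUTE calls) compounds the fee
# by up to 12.5% per block, pricing the attacker out geometrically.
```

The design choice mirrors execution gas pricing: at target usage the fee is unchanged, a full block raises it by 12.5%, and an empty block lowers it by 12.5%.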

Regarding the "current situation":

Looking back, Vitalik proposed an EXECTX precompile as early as 2017, before terms like "native" or "Rollup" existed. The timing was too early then, but in 2025, amid the "Native Rollup" trend, the idea of EVM introspection has regained attention.

Regarding "Should Rollups be organizationally separate from Ethereum":

An ideal endgame model is to view Native Rollups and Based Rollups as smart contracts on L1, just with lower fees. They can enjoy the network effects and security of L1 while having scalability.

For example, ENS is currently an L1 smart contract. In the future, I expect Namechain to become an application chain compatible with both native and Based Rollups, essentially a scalable L1 smart contract. It can maintain organizational independence (like token economics and governance) while deeply integrating into the Ethereum ecosystem.

Embedded Question:

Q: Many might see Execution Sharding as an advantage, while native L2s now seem like a suboptimal choice — or rather the only option, since built-in execution sharding is not on the table.

Answer (Justin Drake):

EXECUTE precompiles are more flexible and powerful than execution sharding. In fact, they can simulate execution sharding, but not vice versa. If someone wants an exact replica of L1 EVM, Native Rollups can also provide that option.

Q: The problem I want to solve is the need for a neutral, trustworthy Rollup carrying the Ethereum brand, rather than outsourcing that responsibility to company-operated Rollups, which do not seem able to meet this demand.

Answer (Justin Drake):

This can be achieved through EXECUTE precompiles. For instance, the Ethereum Foundation could use it to deploy 128 "shards."

Q: You said native L2s are customizable chains that can be instantiated bottom-up through precompiles, better matching Ethereum's programmable spirit; you also said we need a path for EVM-equivalent L2s to escape their security councils. But if the base layer does not implement sequencing, bridging, and some governance mechanism, can we really escape the security council? Falling behind EVM changes is only one way of becoming outdated. In execution sharding, we solve these issues through hard-fork upgrades, benefiting from subjectivocracy. But if this is built on the upper layer, and the base layer does not intervene in upper-layer programs, then when a bug occurs we will not risk a fork to save the application layer. Have the teams you talked to explicitly stated that if Ethereum ships EXECUTE, they will completely remove their security councils and become fully trustless?

Answer (Max Gillett):

The main reason security councils exist is that fraud-proof and validity-proof systems are very complex, and a single implementation error can be catastrophic. If this complex logic (at least for fraud proofs) is incorporated into L1 consensus, client diversity can reduce the risk — an important step toward removing security councils. I believe that if the EXECUTE precompile is designed properly, most "Rollup application logic" (bridging, messaging, etc.) can be made easy to audit, meeting the standard of DeFi smart contracts, which typically do not require a security council.

Subjectivocracy is indeed a concise way to upgrade, but it is only practical when there is little competition between shards. Part of the point of programmable Native Rollups is to let existing L2s continue experimenting with sequencing, governance, and other parameters, with the market ultimately deciding. I expect there will be a spectrum of Native Rollups, from zero-governance community-deployed versions (tracking the L1 EVM) to versions with token governance and experimental precompiles.

Answer (Justin Drake):

Regarding "Do teams commit to complete trustlessness":

What I can confirm is:

  1. Many L2 teams aspire to achieve complete trustlessness.

  2. Mechanisms like EXECUTE are necessary conditions to achieve this goal.

  3. For certain applications (like the minimal execution sharding Martin desires), EXECUTE is sufficient to achieve complete trustlessness.

These three points are enough to drive us toward EXECUTE. Of course, for certain specific L2s, EXECUTE may not be enough, which is also the reason for introducing DERIVE precompiles in early discussions.

Question 2: Optimization of Blob Fee Model

Question:

The Blob fee model seems inadequate and overly simplistic—the minimum fee is only 1 Wei (the smallest unit of ETH). Combined with the pricing mechanism of EIP-1559, if Blob capacity is significantly expanded, we may not see Blob fees rise for a long time. This is not ideal; we want to encourage Blob usage but also do not want the network to bear these data for free. Are there plans to adjust the Blob fee model? If so, how specifically will it be changed? What alternatives or adjustments are being considered?
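For context, EIP-4844 prices blob gas as an exponential of accumulated "excess blob gas" with a hard floor of 1 Wei; the `fake_exponential` helper below is the one from the EIP-4844 specification. Whenever usage sits at or below target, excess blob gas stays at zero and the fee is pinned to the floor, which is exactly the dynamic the question describes:

```python
# Blob base fee calculation per the EIP-4844 specification.
MIN_BASE_FEE_PER_BLOB_GAS = 1          # the 1 Wei floor discussed above
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e**(numerator / denominator)."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def get_base_fee_per_blob_gas(excess_blob_gas: int) -> int:
    # Fee grows exponentially in the gas consumed above target so far;
    # it decays back toward the 1 Wei floor when usage is below target.
    return fake_exponential(
        MIN_BASE_FEE_PER_BLOB_GAS,
        excess_blob_gas,
        BLOB_BASE_FEE_UPDATE_FRACTION,
    )
```

A rough consequence of the 1 Wei floor: with a 3-blob target and 6-blob maximum, a fully full block adds about 393k excess blob gas, so climbing from the floor to e.g. 2²⁵ Wei takes on the order of 150 consecutive full blocks (roughly half an hour) before meaningful price discovery begins.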

Answer (Vitalik Buterin):

I believe the protocol should remain simple, avoiding over-optimization for short-term conditions, while unifying the market logic of execution gas and Blob gas. EIP-7706 is one major direction (it adds an independent gas dimension for calldata).

I support introducing super-exponential base fee adjustment, an idea that has been proposed repeatedly in different contexts. If there are consecutive over-capacity blocks, the fee rises at a super-exponential rate, quickly finding a new equilibrium. With properly set parameters, almost any gas price spike can stabilize within minutes.
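A toy sketch of the idea (my own illustration, not a concrete proposal): let the per-block multiplier itself grow with the length of the over-capacity streak, so sustained congestion compounds much faster than plain EIP-1559's fixed 12.5% step.

```python
# Toy super-exponential adjustment: the per-block multiplier grows with the
# streak of consecutive over-capacity blocks (illustrative, not an EIP).
def adjust(base_fee: int, consecutive_full_blocks: int) -> int:
    fee = base_fee
    for k in range(consecutive_full_blocks):
        # block k of the streak multiplies the fee by (9 + k) / 8:
        # 1.125x, then 1.25x, then 1.375x, ...
        fee = fee * (9 + k) // 8
    return fee
```

After 10 consecutive full blocks this multiplies the fee by roughly 148x, versus about 3.25x for plain EIP-1559's 1.125¹⁰, which is how a spike can re-price within minutes rather than hours.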

Another independent idea is to directly raise the minimum Blob fee. This can shorten peak usage periods (benefiting network stability) and increase more consistent fee destruction.

Answer (Ansgar Dietrichs — Ethereum Foundation):

Your concerns about the Blob fee model are very reasonable, especially during the efficiency improvement phase. Indeed, this is a significant issue related to "L1 value accumulation," but I want to focus on efficiency first.

When developing EIP-4844, we discussed this issue and ultimately decided to set the minimum fee at 1 Wei as a "neutral value" for the initial implementation. Later observations found that this indeed posed challenges for L2 during the transition from non-congested to congested periods. Max Resnick proposed a solution in EIP-7762, suggesting that the minimum fee should be close to zero during non-congested periods but can rise more quickly when demand increases.

This proposal was raised in the later stages of the Pectra fork development, and implementing it may delay the fork. We discussed it at RollCall #9 (an L2 feedback forum) to see if a delay was necessary. L2 feedback indicated that this is no longer an urgent issue, so we decided to maintain the status quo for Pectra. However, if there is strong demand from the ecosystem, future forks may be adjusted.

Answer (Barnabé Monnot — Ethereum Foundation):

Thank you for your question. Indeed, the prior research for EIP-4844 (conducted by u/dcrapis) showed that the transition from 1 Wei to a reasonable market price could have issues, disrupting the market during congestion, which we can observe every time there is Blob congestion. Hence, EIP-7762 was proposed to raise the minimum Blob base fee.

However, even at a 1 Wei base fee, Blobs are not "free-riding" on the network. First, Blobs typically pay priority fees to compensate block proposers. Second, to determine whether they are free, we must ask whether Blobs occupy resources that are not reasonably priced. Some have argued that the increased reorg risk from Blobs (a liveness cost) goes uncompensated; I have responded to that view on X.

I believe the discussion should focus on compensating liveness risk. Some link the Blob base fee to value accrual because the base fee is burned (EIP-1559): if the base fee is low, the network accrues less value, so should we raise it to extract more tax from L2s? I think this is shortsighted. First, the network would need to define a "reasonable tax rate" (as in fiscal policy); second, I believe the growth of the Ethereum economy will bring more value, and arbitrarily raising the cost of Blobs — the raw material for growing that economy — would be counterproductive.

Answer (Dankrad Feist — Ethereum Foundation):

I want to clarify that concerns about the Blob fee being too low are exaggerated and somewhat shortsighted. In the next 2–3 years, the crypto space may grow significantly, and we should focus less on fee extraction and more on long-term development.

That said, I believe the current pure congestion pricing resource model of Ethereum is not ideal, both in terms of price stability and long-term ETH value accumulation. Once Rollups stabilize, a minimum price model that occasionally reverts to congestion pricing would be better. In the short term, I also support a higher minimum Blob price, which would be a better choice.

Answer (Justin Drake — Ethereum Foundation):

Regarding "Are there plans to redesign":

Yes, EIP-7762 proposes to raise the minimum base fee from 1 Wei to a higher value, such as 2²⁵ Wei.

Answer (Davide Crapis — Ethereum Foundation):

I support raising the minimum base fee, which I mentioned in my initial analysis of 4844. However, at that time, core developers were somewhat opposed. Now, the consensus seems to lean more towards considering this useful. I think a minimum base fee (even if slightly lower) is meaningful and not shortsighted. Future demand will increase, but supply will too, and we may encounter the situation where Blob fees remain at a minimum for an extended period, as we saw in the past year.

More broadly, Blobs also consume network bandwidth and memory pool resources, which are currently not priced. We are exploring related upgrades and may optimize Blob pricing in this direction.

Embedded Question:

Q: I want to emphasize that this is not an attempt to extract maximum value from L2, as this reason is often overlooked whenever Blob pricing is questioned.

Answer:

Thank you for the clarification; that is completely correct. The focus is not on maximizing extraction but on designing a fee mechanism that encourages adoption while fairly pricing resources to facilitate the development of the fee market.

Question 3: DA and L1/L2 Value Capture

Question:

L2 expansion has significantly reduced value accumulation on L1 (Ethereum mainnet), affecting the value of ETH. Besides the claim that "Layer 2 will eventually burn more ETH and handle more transactions," what specific plans do you have to address this issue?

Answer (Justin Drake — Ethereum Foundation):

The revenue of blockchains (whether L1 or L2) primarily comes from two parts: congestion fees (i.e., "base fees") and competitive fees (i.e., MEV, maximum extractable value).

First, regarding competitive fees. With advancements in application and wallet design, I believe MEV will increasingly be captured upstream (by applications, wallets, or users), ultimately being taken almost entirely by entities close to the source of traffic, leaving downstream infrastructure (L1 and L2) with only a small portion. In the long run, chasing MEV may be futile for both L1 and L2.

Now, regarding congestion fees. Historically, the bottleneck for L1 has been EVM execution, with the hardware requirements of consensus participants (such as disk I/O and state growth) limiting execution gas. However, with modern designs using SNARKs or fraud proofs for scaling, execution resources will enter a "post-scarcity era," and the bottleneck will shift to data availability (DA). Since validators rely on limited home network bandwidth, DA is fundamentally scarce. Data availability sampling (DAS) can only provide about 100 times linear scaling, unlike SNARKs or fraud proofs, which can scale almost infinitely.

Therefore, we are focusing on DA economics, which I believe is the only sustainable source of revenue for L1. EIP-4844 (which significantly increases DA supply through Blobs) has been implemented for less than a year. The demand for Blobs has grown over time (mainly driven by induced demand), increasing from an average of 1 Blob/block to 2, then 3. Now that supply is saturated, price discovery has just begun, and low-value "garbage" transactions are being pushed out by transactions with higher economic density.

If DA supply remains stable for a few months, I expect hundreds of ETH to be burned daily through DA. However, L1 is currently in "growth mode," and the upcoming Pectra hard fork (expected in a few months) will raise the target number of Blobs from 3 to 6. This will flood the Blob fee market with new supply, and demand will take months to catch up. In the coming years, as full Danksharding rolls out, DA supply and demand will play a cat-and-mouse game.

In the long run, I believe DA demand will exceed supply. Supply is limited by home network bandwidth, and even ~100x the throughput of a single home connection may not meet global demand — humans always find new ways to consume bandwidth. I expect that over the next decade Ethereum will stabilize around 10 million TPS (about 100 transactions per person per day); even at just $0.001 per transaction, that is roughly $1 billion in revenue daily.

Of course, DA revenue is only part of ETH value accumulation. Issuance and monetary premium are also crucial, and I recommend looking at my Devcon talk in 2022.

Embedded Question:

Q: You said, "If DA supply remains unchanged for a few months, hundreds of ETH will be burned daily through DA." Why do you predict this? The data from the past four months when Blob targets were saturated does not seem to support such growth and payment demand. How do you infer from this data that there will be a significant increase in "high payment demand" within a few months?

Answer (Justin Drake):

My rough model is that "real" economic transactions (like users trading tokens) can bear small fees, such as $0.01 per transaction. I suspect that many "garbage" transactions (robot-generated) are currently being replaced by real transactions. Once the demand for real transactions exceeds DA supply, price discovery will begin.

Answer (Vitalik Buterin):

Many L2s are currently either using off-chain DA or delaying their launch because if they plan to use on-chain DA, they will fill the Blob space on their own, leading to a surge in fees. L1 transactions are daily decisions for many small participants, while L2 Blob space is a long-term decision for a few large participants, so it cannot be simply inferred from the daily market. I believe that even with a significant increase in Blob capacity, there is still a great opportunity for substantial demand willing to pay reasonable fees.

Question: 10 million TPS? That seems unrealistic; can you explain how that is possible?

Answer (Justin Drake):

I recommend looking at my 2022 Devcon talk.

In simple terms:

● L1 raw throughput: 10 TPS

● Rollups: 100 times increase

● Danksharding: 100 times increase

● Nielsen's Law (10 years): 100 times increase
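The ladder above, and the revenue figure from the earlier answer, can be sanity-checked with the round numbers quoted in this thread:

```python
# Back-of-the-envelope check of the scaling ladder and the revenue figure
# quoted above; all inputs are the round numbers from the answers.
l1_tps = 10
tps = l1_tps * 100 * 100 * 100        # rollups x danksharding x Nielsen's law
assert tps == 10_000_000

daily_txs = tps * 86_400              # seconds per day
daily_revenue_usd = daily_txs // 1000 # $0.001 per transaction -> $864M/day
per_person_daily = daily_txs // 8_000_000_000  # ~108 tx/person/day
```

So the quoted "10 million TPS" works out to about $864M/day at $0.001 per transaction (the "~$1B daily" order of magnitude) and roughly 108 transactions per person per day at an 8-billion population.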

Question: I believe the supply side can achieve this, but what about the demand side?

Answer (Dankrad Feist — Ethereum Foundation):

All blockchains face the value accumulation dilemma, and there is no perfect answer. If Visa charged a fixed fee per transaction without considering the amount, their revenue would drop significantly, but this is the current state of blockchains. The execution layer is slightly better than the data layer, as it can extract priority fees that reflect urgency, while the data layer only has fixed fees.

My suggestion is to increase value first. Without value creation, there can be no accumulation. To achieve this, we should maximize Ethereum's data layer, making alternative DA unnecessary; expand L1 so that high-value applications can run on L1; and encourage projects like EigenLayer to expand the use of ETH as (non-financial) collateral. (Pure financial collateral expansion is more challenging and may exacerbate the death spiral risk.)

Question: "Encouraging EigenLayer" and "making alternative DA unnecessary" are not contradictory, are they? If DA is the only sustainable source of revenue, supporting EigenLayer would risk EIGEN stakers taking away the potential 10 million TPS or $1 billion in daily revenue, right? As an independent validator and EigenLayer operator, it feels like introducing a Trojan horse, which is contradictory.

Answer (Dankrad Feist):

I see EigenLayer more as a decentralized insurance product collateralized by ETH (EigenDA is just one aspect of it). I hope Ethereum's DA expands to the point where EigenDA is unnecessary for financial use cases.

Justin believes that DA is Ethereum's primary source of revenue, which may be incorrect. Ethereum already has something more valuable—high liquidity in the execution layer; DA is just a small part of it (but useful for white-label Ethereum and highly scalable applications). DA has a moat, but its price is far lower than that of the execution layer, so more expansion needs to be provided.

Answer (Justin Drake):

Haha, Dankrad and I have been debating this point for years. I believe the execution layer is indefensible, MEV will be captured by applications, and SNARKs will make execution no longer a bottleneck. Time will tell.

Answer (Dankrad Feist):

SNARKs do not affect this; synchronous state access is the fundamental value and limitation of the execution layer. What a core can execute is irrelevant to SNARKs. I also don't think DA lacks value accumulation, but the charging capacity of the execution layer and DA may differ by 2–3 orders of magnitude. The ones that can charge high prices are likely DA combined with ordering, rather than general DA.

Answer (Justin Drake):

You believe that "competition" (state access limitations or ordering constraints) has value. I agree it has value, but I do not believe it will yield long-term returns for L1 or L2. Applications, wallets, and users close to the source of traffic will recapture competitive value.

L1 DA is irreplaceable for applications pursuing top security and composability. EigenDA is the "most fitting" alternative DA, typically serving as an "overflow" option for high-capacity, low-value applications (like games).

Question 4: The Endgame of Block Construction

Question:

What will Ethereum's endgame block construction look like? The trusted gateway model proposed by Justin seems like a centralized sequencer, which may be incompatible with the APS and ePBS (enshrined Proposer-Builder Separation) designs we expect. The current FOCIL (Fork-Choice enforced Inclusion Lists) design is not suited to transactions carrying MEV, so block construction seems to favor non-financial applications on L1, potentially pushing applications toward L2s with fast centralized sequencers.

Digging deeper: can we design a sequencing system that neither maximizes MEV extraction on L1 nor is inefficient? Do all efficient, low-extraction designs require a principal agent (like a centralized sequencer or a pre-confirmation gateway)? Are multiple-concurrent-proposer (MCP) designs like BRAID still being explored?

Answer (Justin Drake — Ethereum Foundation):

I don't quite understand your point. Let me clarify a few things:

  1. APS (Attester-Proposer Separation) and ePBS (enshrined Proposer-Builder Separation) are different design domains; this is the first time I've seen them combined as "APS ePBS."

  2. The gateway as I understand it is similar to a "pre-confirmation relay." Just as ePBS eliminates the relay's intermediary role, APS eliminates the need for a gateway: under APS, L1 execution proposers (if sufficiently specialized) can issue pre-confirmations directly without delegating to a gateway.

  3. Saying "gateways are incompatible with APS" is like saying "relays are incompatible with ePBS" — the whole design intent is to remove these intermediary roles! Gateways are merely a temporary, stopgap complexity until APS arrives.

  4. Before APS, I don't see why the gateway is compared to centralized sequencing. Centralized sequencing is permissioned, while the gateway market (and the set of L1 proposers delegating to gateways) is permissionless. Is it because only a single gateway sequences in each slot? By that logic L1 is also centrally sequenced, since there is only a single proposer per slot. The core of decentralized sequencing is rotating ephemeral sequencers drawn from a permissionless set.

I believe MCP (multiple concurrent proposers) is a suboptimal design, for several reasons: it introduces centralizing multi-block games, complicates fee handling, and requires complex infrastructure (like VDFs, Verifiable Delay Functions) to prevent last-moment bidding advantages.

If MCP is as excellent as Max Resnick claims, we will soon see results on Solana — Max now works full-time at Solana, Anatoly also supports MCP to reduce latency, and Solana iterates quickly™. By the way, L2s can experiment with MCP permissionlessly, and I welcome that. Yet when Max worked on MetaMask at Consensys, he couldn't even convince the in-house L2 Linea to switch to MCP.

Answer (Barnabé Monnot — Ethereum Foundation):

I want to offer an alternative perspective on the endgame. My preliminary roadmap is as follows, and it is already a significant challenge:

● Deploy FOCIL to ensure censorship resistance, starting to decouple scaling limits from local block construction limits.

● Deploy SSF (Single Slot Finality) as soon as possible, aiming to shorten slot times. This requires deploying Orbit to ensure the validator scale aligns with SSF and slot goals.

At the same time, I believe application layer improvements (like BuilderNet, various Rollups, and L1-based Rollups) can ensure block construction innovation and support new applications.

Meanwhile, we should seriously consider different architectures for L1 block construction, including BRAID. Perhaps the endgame will never be final — who knows. But after deploying FOCIL and SSF/shorter slots, the next steps will rest on firmer ground.

Question 5: Regrets About Focusing on L2

Question:

Given the sentiment in the community, do you still firmly believe that focusing on L2 is the right choice? If you could go back in time, what would you change?

Answer (Ansgar Dietrichs — Ethereum Foundation):

My view is that Ethereum's strategy has always been to pursue principled architectural solutions. In the long run, Rollup is the only principled solution that can scale blockchain to the global economic base layer. Monolithic chains require "every participant to verify everything," while Rollup significantly reduces the verification burden through "execution compression." Only the latter can scale to billions of users (potentially including AI agents).

Looking back, I feel we were insufficiently focused on the path to achieving the ultimate goal and the intermediate user experience. Even in a Rollup-dominated world, L1 still needs significant scaling, which Vitalik has recently mentioned. We should have realized that continuously scaling L1 while pushing L2 could bring more value to users during the transition period.

I believe Ethereum has long lacked true competition, leading to some complacency. The now more intense competition has exposed these misjudgments and is driving us to deliver better "products," not just theoretically correct solutions.

But to reiterate, Rollups are crucial to reaching the scaling endgame. The specific architecture is still evolving — Justin's exploration of native Rollups, for example, shows the approach is still being adjusted — but the overall direction is clearly correct.

Answer (Dankrad Feist — Ethereum Foundation):

I disagree in some respects. If we define Rollup as "scalable DA and execution verification," how is it different from execution sharding?

In fact, we view Rollup more as "white-label Ethereum." Fairly speaking, this has released a lot of energy and funding. If we had focused solely on execution sharding in 2020, we wouldn't have made the current progress in zkEVM and interoperability research.

Technically, we can now achieve any goal — highly scalable L1, extremely scalable sharded chains, or the foundational layer of Rollup. The best for Ethereum is to combine the first and third options.

Question 6: Economic Security Risks of ETH

Question:

If the price of ETH in dollars falls below a certain level, will it threaten the economic security of Ethereum?

Answer (Justin Drake — Ethereum Foundation):

If we want Ethereum to effectively resist attacks — including those from nation-state actors — then high economic security is crucial. Currently, Ethereum has about $80 billion in penalizable economic security (based on 33,644,183 ETH staked, with each ETH around $2,385), which is the highest among all blockchains. In contrast, Bitcoin has only about $10 billion in (non-penalizable) economic security.
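The quoted figure checks out arithmetically:

```python
# Sanity check of the economic-security figure quoted above.
staked_eth = 33_644_183
eth_price_usd = 2_385
security_usd = staked_eth * eth_price_usd
# ~= $80.2B of slashable ("penalizable") stake at the quoted price
assert 80_000_000_000 < security_usd < 81_000_000_000
```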

Question 7: Plans for Mainnet Scaling and Fee Reduction

Question:

What plans does the Ethereum Foundation have in the coming years to enhance mainnet scalability and reduce transaction fees?

Answer (Vitalik Buterin):

  1. Scale L2: Increase the number of Blobs, such as PeerDAS in Fusaka, to further enhance data capacity.

  2. Optimize interoperability and user experience: Improve cross-L2 interactions, such as the recent Open Intents Framework.

  3. Moderately increase L1 Gas limits.

Question 8: Future Application Scenarios and L1/L2 Collaboration

Question:

What applications and use cases have you designed for Ethereum within the following timeframes:

● Short-term (1 year)

● Medium-term (1–3 years)

● Long-term (4+ years)

How will activities on L1 and L2 collaborate during these timeframes?

Answer (Ansgar Dietrichs — Ethereum Foundation):

This is a broad question, and I will provide some insights focusing on overall trends:

● Short-term (1 year): Focus on stablecoins, as they have fewer regulatory restrictions and are already pioneers in real-world applications, with small-scale cases like Polymarket starting to show impact.

● Medium-term (1–3 years): Expand to more real-world assets (like stocks and bonds), utilizing DeFi modules for seamless interoperability, along with innovations in business process on-chain, governance, prediction markets, etc.

● Long-term (4+ years): Achieve "real-world Ethereum" (the vision of DC Posch), building real products for billions of users and AI agents, with crypto as a facilitator rather than a selling point.

● L1/L2 relationship: The original vision of "L1 only for settlement and rebalancing" needs updating; L1 scaling remains important, while L2 continues to be the main scaling force, and the relationship will further evolve in the coming months.

Answer (Carl Beekhuizen — Ethereum Foundation):

We focus on scaling the entire tech stack rather than designing for specific applications. Ethereum's strength lies in maintaining neutrality over what runs in the EVM, providing the best platform for developers. The core theme is scaling: how to build the most powerful system while maintaining decentralization and censorship resistance.

● Short-term (1 year): Focus on launching PeerDAS, significantly increasing the number of Blobs in blocks; simultaneously improve the EVM, such as quickly launching EOF (EVM Object Format). Research is also ongoing, including statelessness, gas repricing, and EVM zero-knowledge.

● Medium-term (1–3 years): Further expand Blob throughput, launching early research projects like the zkEVM plan from ethproofs.org.

● Long-term (4+ years): Add significant scaling to the EVM (L2 will also benefit), significantly enhance Blob throughput, improve censorship resistance through measures like FOCIL, and accelerate with zero-knowledge technology.

Question 9: Verge Choices and Hash Functions

Question:

Vitalik mentioned in a recent post about Verge that we will soon face three choices: (i) Verkle trees, (ii) STARK-friendly hash functions, (iii) conservative hash functions. Has a decision been made on which path to take?

Answer (Vitalik Buterin):

This is still under intense discussion. Personally, I feel that the atmosphere has slightly leaned towards (ii) in the past few months, but nothing has been finalized yet.

I think these options should be considered in the context of the overall roadmap. The realistic choices might be:

● Option A:

  ● 2025: Pectra, possibly adding EOF

  ● 2026: Verkle trees

  ● 2027: L1 execution optimization (delayed execution, multi-dimensional gas, repricing)

● Option B:

  ● 2025: Pectra, possibly adding EOF

  ● 2026: L1 execution optimization (delayed execution, multi-dimensional gas, repricing)

  ● 2027: Initial rollout of Poseidon (initially only encouraging a small number of clients to become stateless, reducing risk)

  ● 2028: Gradually increase stateless clients

Option B is also compatible with conservative hash functions, but I still prefer a gradual rollout. Even if the hash function is less risky than Poseidon, the proof system still carries a higher risk in the early stages.

Answer (Justin Drake — Ethereum Foundation):

As Vitalik mentioned, recent choices are still under discussion. But from a long-term fundamental perspective, (ii) is clearly the direction. Because (i) lacks post-quantum security, and (iii) is less efficient.

Question 10: VDF Progress

Question:

What is the latest progress on VDFs (Verifiable Delay Functions)? I remember a paper from 2024 pointed out some fundamental issues.

Answer (Dmitry Khovratovich — Ethereum Foundation):

Currently, we lack ideal VDF candidates. This may change as new models (for analysis) and new constructions (heuristic or otherwise) are developed. At the current state of the art, however, we cannot confidently claim of any scheme that it cannot be accelerated, say by a factor of 5. The consensus is therefore to set VDFs aside for now.

Question 11: Adjusting Block Time and Finality Time

Question:

From a developer's perspective, is there a preference to gradually shorten block time, reduce finality time, or keep both unchanged until achieving Single Slot Finality (SSF)?

Answer (Barnabé Monnot — Ethereum Foundation):

I am not sure if there is a compromise path to shorten finality time between now and SSF. I believe that launching SSF is the best opportunity to simultaneously shorten finality delays and slot times. We can adjust based on the existing protocol, but if we can achieve SSF in the short term, it may not be worth spending effort on the current protocol.

Answer (Francesco D’Amato — Ethereum Foundation):

Before SSF, we can definitely shorten block time (for example, to 6–9 seconds), but it is best to first confirm whether this is compatible with SSF and other aspects of the roadmap (like ePBS). Currently, I understand that SSF should be compatible, but that does not mean we should do it immediately; the design of SSF is not yet fully determined.

Question 12: FOCIL and Encrypted Memory Pools

Question:

Why not skip FOCIL (fork-choice enforced inclusion lists) and go directly to encrypted memory pools?

Answer (Justin Drake — Ethereum Foundation):

Unfortunately, encrypted memory pools are insufficient to guarantee forced inclusion. This is already visible in the TEE (Trusted Execution Environment) based BuilderNet running on mainnet: Flashbots filters OFAC-sanctioned transactions out of its BuilderNet blocks, and the TEE, which has access to the unencrypted transaction content, can filter easily. More advanced memory pools based on MPC (Multi-Party Computation) or FHE (Fully Homomorphic Encryption) have similar issues: sequencers can require zero-knowledge proofs in order to exclude the transactions they do not want to include.

More broadly, encrypted memory pools and FOCIL are orthogonal and complementary. Encrypted memory pools focus on private inclusion, while FOCIL focuses on forced inclusion. They also operate at different layers of the stack: FOCIL is infrastructure built into L1, while encrypted memory pools operate off-chain or at the application layer.

Answer (Julian Ma — Ethereum Foundation):

Although both FOCIL and encrypted memory pools aim to enhance censorship resistance, they are complements rather than substitutes, so FOCIL is not a stepping stone toward encrypted memory pools. The main reason there is no encrypted memory pool today is the lack of a satisfactory proposal, though efforts are ongoing; deploying one now would impose honesty assumptions on Ethereum's liveness.

FOCIL should be deployed because it has a robust proposal, the community has confidence in it, and its implementation is relatively lightweight. When the two are combined, encrypting the transactions in FOCIL's inclusion lists can limit the economic damage that reordering inflicts on users.
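The complementarity described in these answers can be illustrated with a deliberately simplified toy model (all names and data below are hypothetical, not the actual BuilderNet or FOCIL design): an encrypted mempool alone still lets the decrypting builder censor, while a FOCIL-style validity check turns censorship into an invalid block.

```python
# Toy model (illustrative only): a builder that decrypts transactions
# inside a TEE can still censor, whereas a FOCIL-style inclusion list
# makes a censoring block invalid.

DENYLIST = {"0xsanctioned"}

def decrypt(blob: dict) -> dict:
    """Stand-in for in-enclave decryption: the enclave sees plaintext."""
    return blob

def build_block(encrypted_pool: list) -> list:
    """Encrypted mempool alone: the builder filters after decrypting."""
    txs = [decrypt(b) for b in encrypted_pool]
    return [tx for tx in txs if tx["sender"] not in DENYLIST]

def validate_with_focil(block: list, inclusion_list: list) -> bool:
    """FOCIL check: every forced transaction must appear in the block."""
    return all(tx in block for tx in inclusion_list)

pool = [{"sender": "0xalice"}, {"sender": "0xsanctioned"}]
block = build_block(pool)              # the sanctioned tx is censored
forced = [{"sender": "0xsanctioned"}]  # attesters force its inclusion
assert {"sender": "0xsanctioned"} not in block
assert not validate_with_focil(block, forced)  # censoring block is invalid
```

The point of the sketch is the separation of concerns: encryption governs who can read a transaction before inclusion, while the inclusion-list check governs whether a block that omits it is accepted at all.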

Question 13: Gas and Blob Limit Voting

Question:

Will you allow the number of Blobs to be determined by staker voting, like the Gas limit? Large players might collude to raise the limit, pushing out home stakers with insufficient hardware or bandwidth and centralizing staking. If such increases are unbounded, will they become harder to oppose through hard forks? And if hardware and bandwidth requirements are effectively set by voting, what is the point of specifying them at all? Stakers' interests may not align with those of the network as a whole; is such voting appropriate?

Answer (Vitalik Buterin):

I personally think that (i) allowing the number of Blobs to be determined by staker voting, and (ii) having clients coordinate more frequent updates to the default voting parameters, is a good idea. This provides the same functionality as "Blob Parameter Only (BPO) forks" but more robustly: if clients fail to upgrade in time or implement the change incorrectly, there is no consensus failure. Many supporters of BPO forks are actually describing this idea.
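For context on the mechanism being analogized: the existing Gas-limit vote lets each block proposer nudge the limit a small step toward its client's configured target. The sketch below is a simplification of the protocol rule that the limit may move by strictly less than 1/1024 of the parent limit per block; the 36M-to-60M scenario is illustrative, not a roadmap commitment.

```python
# Sketch of Ethereum's gas-limit voting rule (simplified): each proposer
# may move the limit by strictly less than parent_limit / 1024 per block,
# stepping toward whatever target its client is configured with.

def next_gas_limit(parent_limit: int, target: int) -> int:
    max_delta = parent_limit // 1024 - 1  # strict inequality in the rule
    if target > parent_limit:
        return min(target, parent_limit + max_delta)
    return max(target, parent_limit - max_delta)

# Simulate validators all targeting 60M starting from 36M:
limit, blocks = 36_000_000, 0
while limit != 60_000_000:
    limit = next_gas_limit(limit, 60_000_000)
    blocks += 1
# Convergence takes on the order of 500 blocks (under two hours at 12 s slots).
```

Voting the Blob count the same way would mean clients ship updated default targets rather than hard-coding new limits into a fork, which is exactly why a botched or late upgrade degrades gracefully instead of splitting consensus.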

Question 14: Features of Fusaka and Glamsterdam Upgrades

Question:

What features should the Fusaka and Glamsterdam upgrades include to significantly advance the roadmap?

Answer (Francesco D’Amato — Ethereum Foundation):

As mentioned, Fusaka will greatly enhance data availability (DA). I hope Glamsterdam achieves a similar leap at the execution layer (EL), as that is when the room for improvement in EL is the greatest (with over a year to determine direction). Current repricing efforts may bring significant changes in Glamsterdam, but that is not the only option.

Additionally, FOCIL can be seen as a scalability EIP: it better separates local block construction from the demands placed on validators, and together with its goals of enhancing censorship resistance and reducing reliance on altruistic behavior, it will push Ethereum forward. These are my current priorities, but certainly not the only ones.

Answer (Barnabé Monnot — Ethereum Foundation):

Fusaka focuses on PeerDAS, which is crucial for L2 scaling, and almost no one wants other features to delay it. I hope Glamsterdam includes FOCIL and Orbit to pave the way for SSF.

The above leans towards consensus layer (CL) and DA, but Glamsterdam should also have an effort at the execution layer (EL) to significantly advance L1 scaling. Discussions on the specific feature set are ongoing.

Question 15: Enforcing L2 Decentralization

Question:

Given the slow progress of L2 decentralization, can we "enforce" L2 to adopt Stage 1 or Stage 2 decentralization through EIP?

Answer (Vitalik Buterin):

Native Rollups (via the EXECUTE precompile) achieve this to some extent. L2s remain free to ignore it and code in backdoors, but they can instead use the simple, high-security proof system built into L1. L2s pursuing EVM compatibility are likely to choose this option.

Question 16: The Greatest Survival Risk for Ethereum

Question:

What is the greatest survival risk facing Ethereum?

Answer (Vitalik Buterin):

Superintelligent AI could lead to a single entity controlling most of the world's resources and power, rendering blockchains irrelevant.

Question 17: Impact of Alt-DA on ETH Holders

Question:

Is Alt-DA (data availability outside the ETH mainnet) a vulnerability or a feature for ETH holders in the short, medium, and long term?

Answer (Vitalik Buterin):

I still stubbornly hope for a focused research and development team to explore ideal Plasma-like designs, allowing chains dependent on Ethereum L1 to provide stronger (though imperfect) security guarantees for users when using alternative DA. There are many overlooked opportunities here that can enhance user security and also be valuable to DA teams.

Question 18: Future Outlook for Hardware Wallets

Question:

What is your vision for the future of hardware wallets?

Answer (Justin Drake — Ethereum Foundation):

In the future, most hardware wallets will be based on mobile secure enclaves rather than standalone devices like Ledger USB. Account abstraction has made infrastructure like Passkeys available. I hope to see native integration (like in Apple Pay) within this decade.

Answer (Vitalik Buterin):

Hardware wallets need to "truly achieve security" from several aspects:

  1. Secure hardware: Based on open-source, verifiable stacks (like IRIS), reducing the risk of backdoors and side-channel attacks.

  2. Interface security: Providing sufficient transaction information to prevent computers from tricking users into signing unintended content.

  3. Popularity: Ideally, create a device that serves as both a crypto wallet and for other secure purposes, encouraging more people to acquire and use it.

Question 19: L1 Gas Limit Goals for 2025

Question:

What is the Gas limit goal for L1 in 2025?

Answer (Toni Wahrstätter — Ethereum Foundation):

There are differing opinions on the Gas limit, but the core question is: should we expand L1 by raising the Gas limit, or focus on L2 and use technologies like DAS to increase Blobs?

Vitalik's recent blog discussed the rationale for moderately expanding L1. However, raising the Gas limit has trade-offs:

● Higher hardware requirements

● Growth in state and historical data, increasing the burden on nodes

● Greater bandwidth demands

On the other hand, the Rollup-centric vision aims to enhance scalability without increasing node requirements. PeerDAS (in the short term) and complete DAS (in the medium to long term) will release significant potential while keeping resource control.
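As background on how that Blob capacity is priced (not something this answer spells out): EIP-4844 gives blobs their own fee market, in which the blob base fee grows exponentially with how far recent usage has exceeded the target. A sketch using the integer-exponential helper and constants from the EIP:

```python
# Blob base fee from EIP-4844: an integer approximation of
# factor * e^(numerator / denominator), computed by Taylor series.

MIN_BASE_FEE_PER_BLOB_GAS = 1
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def get_base_fee_per_blob_gas(excess_blob_gas: int) -> int:
    return fake_exponential(
        MIN_BASE_FEE_PER_BLOB_GAS,
        excess_blob_gas,
        BLOB_BASE_FEE_UPDATE_FRACTION,
    )

# With zero excess the fee sits at the 1-wei floor; it rises
# exponentially as sustained demand pushes excess_blob_gas up.
```

This separate market is why raising Blob throughput via DAS scales L2 data without touching the execution-layer Gas limit debated above.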

I would not be surprised if after the Pectra hard fork (in April), validators push the Gas limit to 60 million. However, in the long run, the focus on scaling may shift to DAS solutions rather than simply increasing the Gas limit.

Question 20: Transition to Beam Client

Question:

If the Ethereum Beam client experiment (or its renamed version) is successful, and several usable implementations are available within 2–3 years, will there need to be a phase where the current PoS and Beam PoS run in parallel and both receive staking rewards, similar to the transition from PoW to PoS?

Answer (Vitalik Buterin):

I think we can proceed with an immediate upgrade.

The reason for using a dual-chain during the merge was:

● PoS was not fully tested, and time was needed for the ecosystem to operate to ensure a safe switch.

● PoW is reorgable, and the switching mechanism needs to be robust.

PoS has finality, and most of the infrastructure (like staking) can carry over. We can switch the validation rules from the beacon chain to the new design through a hard fork. There may be a brief period of insufficient economic finality at the transition point, but this is an acceptable small cost.

Answer (Justin Drake — Ethereum Foundation):

I assume the upgrade from the beacon chain to Beam will be handled like a normal fork, without needing a "merge 2.0." A few thoughts:

  1. Consensus participants (ETH stakers) are the same on both sides of the fork, unlike the merge where the group changed and there was a risk of miner interference.

  2. The "clocks" on both sides of the fork are consistent, unlike the probabilistic slot to fixed slot transition from PoW to PoS.

  3. Infrastructure like libp2p, SSZ, and anti-slashing databases are mature and can be reused.

  4. Unlike the merge, there is no urgency comparable to the rush to disable PoW and its extra issuance; we can take the time for due diligence and quality assurance (multiple testnet runs) to ensure a smooth mainnet fork.

Question 21: Academic Funding Plans for 2025

Question:

The Ethereum Foundation has launched a $2 million academic funding program for 2025. What research areas are prioritized? How will the results be integrated into the Ethereum roadmap?

Answer (Fredrik Svantes — Ethereum Foundation):

The protocol security team is interested in the following areas:

● P2P Security: Many vulnerabilities are related to network layer DoS attacks (such as libp2p or devp2p), and improvements in this area are valuable.

● Fuzz Testing: While testing has been done on the EVM and consensus layer clients, there is room for deeper exploration in areas like the network layer.

● Supply Chain Risks: Understanding the current dependency risks of Ethereum.

● LLM Applications: How large language models can enhance protocol security (such as auditing code and automating fuzz testing).

Answer (Alexander Hicks — Ethereum Foundation):

On integration: we continuously engage with academia, funding research and participating in it. The Ethereum system is unusual enough that academic research does not always feed directly into the roadmap (our consensus protocols, for example, are distinctive, so academic results are hard to translate directly), but in areas like zero-knowledge proofs the impact is clear.

The academic funding program is part of our internal and external research, and this time it explores interesting topics that may not directly impact the roadmap. For instance, I have added formal verification and AI-related topics. Currently, the practicality of AI in Ethereum tasks is yet to be validated, but I hope to drive progress in the next year or two. This is a great opportunity to assess the current state, improve methods, and attract researchers from cross-disciplinary fields who may not know much about Ethereum but are interested.
