Developer rebuts Vitalik: The premise is incorrect, and RISC-V is not the best choice.


This article is from: Ethereum developer levochka.eth

Compiled by | Odaily Planet Daily (@OdailyChina); Translator | Azuma (@azuma_eth)

Editor's Note:

Yesterday, Ethereum co-founder Vitalik published a radical article on upgrading Ethereum's execution layer (see "Vitalik's Radical New Article: Execution Layer Scaling 'No Pain, No Gain', EVM Must Be Iterated in the Future"), in which he expressed the hope of replacing the EVM with RISC-V as the virtual machine for smart contracts.

This article immediately stirred up a storm in the Ethereum developer community, with several technical leaders expressing differing views on the proposal. Shortly after the article was published, leading Ethereum developer levochka.eth wrote a lengthy rebuttal to Vitalik's views in the comments, arguing that Vitalik made incorrect assumptions about the proof system and its performance, and that RISC-V may not be the best choice for balancing "scalability" and "maintainability."

Below is the original content from levochka.eth, compiled by Odaily Planet Daily.

Developer Rebuts Vitalik: Incorrect Assumptions, RISC-V is Not the Best Choice

Please do not do this.

This plan is unreasonable because you have made incorrect assumptions about the proof system and its performance.

Verification of Assumptions

As I understand it, the main arguments for the proposal are "scalability" and "maintainability."

First, I want to discuss maintainability.

In fact, all RISC-V zkVMs need "precompiles" to handle compute-intensive operations. The list of SP1's precompiles can be found in Succinct's documentation, and you will find that it covers almost all of the important "computational" opcodes in the EVM.

Therefore, any modification to the underlying cryptographic primitives will require writing and auditing new "circuits" for these precompiles, which is a serious limitation.
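To make the precompile point concrete, here is a minimal Rust sketch of the dispatch pattern described above. It is purely illustrative and assumes a made-up prover interface (it is not SP1's or any other zkVM's actual API): operations that have a dedicated circuit take the cheap, hand-audited path, and everything else would have to be proven instruction by instruction.

```rust
// Assumed design, for illustration only: a prover dispatch table that routes
// compute-heavy operations to dedicated "precompile" circuits instead of
// proving them as ordinary RISC-V instructions.

use std::collections::HashMap;

/// Each precompile is a separately written and separately audited circuit.
/// Swapping the underlying primitive (say, a new hash) means writing and
/// auditing a brand-new circuit here -- that is the maintenance cost.
trait PrecompileCircuit {
    fn evaluate(&self, input: &[u8]) -> Vec<u8>;
}

/// Placeholder for a Keccak-256 precompile; a real circuit would encode the
/// full Keccak-f[1600] permutation.
struct Keccak256Circuit;
impl PrecompileCircuit for Keccak256Circuit {
    fn evaluate(&self, input: &[u8]) -> Vec<u8> {
        // Dummy mixing, only to keep the sketch self-contained and runnable.
        input.iter().enumerate().fold(vec![0u8; 32], |mut acc, (i, b)| {
            acc[i % 32] ^= b.rotate_left((i % 7) as u32);
            acc
        })
    }
}

struct Prover {
    precompiles: HashMap<&'static str, Box<dyn PrecompileCircuit>>,
}

impl Prover {
    fn new() -> Self {
        let mut precompiles: HashMap<&'static str, Box<dyn PrecompileCircuit>> =
            HashMap::new();
        precompiles.insert("keccak256", Box::new(Keccak256Circuit));
        Self { precompiles }
    }

    fn prove_op(&self, op: &str, input: &[u8]) -> Vec<u8> {
        match self.precompiles.get(op) {
            // Cheap path: one dedicated, audited circuit per call.
            Some(circuit) => circuit.evaluate(input),
            // Expensive path: prove a software implementation instruction by
            // instruction -- orders of magnitude more trace rows.
            None => panic!("no circuit for {op}: would fall back to plain RISC-V proving"),
        }
    }
}

fn main() {
    let prover = Prover::new();
    let digest = prover.prove_op("keccak256", b"hello");
    println!("toy keccak256 precompile output: {digest:?}");
}
```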

Indeed, if the performance is good enough, maintaining the "non-EVM" part of the execution client may be relatively easy. I am not sure the performance will be good enough, and my confidence here is low, for the following reasons:

  • "State tree computation" can indeed be significantly accelerated through friendly precompiles (like Poseidon).

  • However, whether "deserialization" can be handled in an elegant and maintainable way is still unclear.

  • Additionally, there are some tricky details (such as gas metering and various checks) that count toward "block evaluation time" but actually belong to the "non-EVM" part, and it is these parts that tend to face the most maintenance pressure.

Secondly, regarding scalability.

I need to reiterate: RISC-V absolutely cannot handle EVM workloads without precompiles.

Thus, the original statement that "the final proof time will mainly be dominated by the operations that are precompiles today" is technically correct but overly optimistic: it suggests that precompiles could disappear in the future, whereas in fact (in that future scenario) precompiles will still exist and will map exactly onto the compute-intensive EVM opcodes (such as signatures, hashes, and potentially big-number modular arithmetic).

Regarding the "Fibonacci" example, it is difficult to make a judgment without delving into the very low-level details, but its advantages come at least partly from:

  • The gap between "interpretation" overhead and direct "execution" overhead;

  • Loop unrolling (which reduces RISC-V "control flow"; whether Solidity can achieve this is still uncertain, and even a single opcode will still generate a lot of control flow and memory accesses due to interpretation overhead);

  • Using smaller data types;

Here I want to point out that to obtain the advantages in points 1 and 2, the "interpretation overhead" must be eliminated. That aligns with the philosophy of RISC-V, but it is not the RISC-V we are discussing now; it is a similar "RISC-V" that would need certain additional capabilities, such as supporting the concept of contracts.
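The following toy Rust sketch (my own model, not Vitalik's benchmark) illustrates points 1 and 2: the interpreted path pays fetch, decode, dispatch, and register traffic for every guest opcode, while the directly executed path pays roughly one loop body per step. The per-opcode cost constants are invented for illustration; inside a zkVM, every one of those extra host instructions becomes additional trace rows to prove.

```rust
// Toy comparison of interpretation overhead vs. direct execution for Fibonacci.
// The "host_ops" counters are rough, invented cost models, not measurements.

#[derive(Clone, Copy)]
enum Op {
    Set(usize, u64),          // reg <- constant
    Add(usize, usize, usize), // dst <- a + b
    Mov(usize, usize),        // dst <- src
    Dec(usize),               // reg <- reg - 1
    Jz(usize, usize),         // if reg == 0, jump to target
    Jmp(usize),               // unconditional jump
    Halt,
}

/// Interpreted path: every executed opcode costs dispatch plus operand traffic,
/// mimicking the per-opcode overhead an EVM interpreter pays.
fn fib_interpreted(n: u64) -> (u64, u64) {
    let prog = [
        Op::Set(0, 0),    // a = 0
        Op::Set(1, 1),    // b = 1
        Op::Set(2, n),    // i = n
        Op::Jz(2, 9),     // while i != 0
        Op::Add(3, 0, 1), //   t = a + b
        Op::Mov(0, 1),    //   a = b
        Op::Mov(1, 3),    //   b = t
        Op::Dec(2),       //   i -= 1
        Op::Jmp(3),
        Op::Halt,
    ];
    let mut regs = [0u64; 4];
    let (mut pc, mut host_ops) = (0usize, 0u64);
    loop {
        host_ops += 10; // rough model: fetch + decode + dispatch + operand traffic
        match prog[pc] {
            Op::Set(r, v) => { regs[r] = v; pc += 1; }
            Op::Add(d, a, b) => { regs[d] = regs[a] + regs[b]; pc += 1; }
            Op::Mov(d, s) => { regs[d] = regs[s]; pc += 1; }
            Op::Dec(r) => { regs[r] -= 1; pc += 1; }
            Op::Jz(r, t) => { pc = if regs[r] == 0 { t } else { pc + 1 }; }
            Op::Jmp(t) => { pc = t; }
            Op::Halt => break,
        }
    }
    (regs[0], host_ops)
}

/// Direct path: the same computation as straight compiled code.
fn fib_direct(n: u64) -> (u64, u64) {
    let (mut a, mut b, mut host_ops) = (0u64, 1u64, 0u64);
    for _ in 0..n {
        let t = a + b;
        a = b;
        b = t;
        host_ops += 3; // roughly: one add, two moves per iteration
    }
    (a, host_ops)
}

fn main() {
    let (x, interp_cost) = fib_interpreted(20);
    let (y, direct_cost) = fib_direct(20);
    assert_eq!(x, y);
    println!("fib(20) = {x}; modelled cost: interpreted {interp_cost} vs direct {direct_cost}");
}
```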

The Problem Arises

So, there are some issues here.

  • To improve maintainability, you need a RISC-V to which the EVM (with precompiles) can be compiled; this is basically the current situation.

  • To improve scalability, you need something completely different: a new architecture that may resemble RISC-V but that understands the concept of "contracts," is compatible with Ethereum's runtime constraints, and can execute contract code directly (without "interpretation overhead").

I will now assume you mean the second case (as the rest of the article seems to imply). I must remind you that all code outside this environment will still be written for the current RISC-V zkVMs, which has significant maintenance implications.

Other Possibilities

We can compile high-level EVM opcodes down to bytecode for this architecture. The compiler would be responsible for ensuring that the generated program preserves invariants, such as never overflowing the stack. (I would like to see this demonstrated on the regular EVM first.) A SNARK attesting that the compilation was done correctly could be provided alongside the contract deployment transaction.
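As a rough illustration of what "the compiler maintains the invariants" could mean, here is a hedged Rust sketch of a static stack-depth check over a toy, EVM-flavoured opcode encoding (the opcode values and the simplified handling of push immediates are assumptions of this sketch, not a spec). A SNARK attesting that such a check and the compilation were performed correctly is what would accompany deployment.

```rust
// Toy static analysis: prove before deployment that a straight-line program
// never underflows the stack and never exceeds the 1024-item limit, so the
// runtime no longer needs to check it.

const STACK_LIMIT: i64 = 1024;

/// (pops, pushes) for a handful of EVM-flavoured opcodes in this toy encoding.
fn stack_effect(op: u8) -> Option<(i64, i64)> {
    match op {
        0x60 => Some((0, 1)), // PUSH1
        0x01 => Some((2, 1)), // ADD
        0x80 => Some((1, 2)), // DUP1
        0x50 => Some((1, 0)), // POP
        0x00 => Some((0, 0)), // STOP
        _ => None,            // unknown opcode: reject at compile time
    }
}

/// Returns Ok(max_depth) if the program provably respects the stack invariants.
fn check_stack_invariant(code: &[u8]) -> Result<i64, String> {
    let (mut depth, mut max_depth) = (0i64, 0i64);
    for (i, &op) in code.iter().enumerate() {
        let (pops, pushes) =
            stack_effect(op).ok_or_else(|| format!("unknown opcode {op:#04x} at offset {i}"))?;
        if depth < pops {
            return Err(format!("possible stack underflow at offset {i}"));
        }
        depth = depth - pops + pushes;
        max_depth = max_depth.max(depth);
        if max_depth > STACK_LIMIT {
            return Err(format!("stack may exceed {STACK_LIMIT} items at offset {i}"));
        }
    }
    Ok(max_depth)
}

fn main() {
    // PUSH1, PUSH1, ADD, POP, STOP (push immediates omitted in this toy encoding)
    let ok_program = [0x60, 0x60, 0x01, 0x50, 0x00];
    println!("{:?}", check_stack_invariant(&ok_program)); // Ok(2)

    let bad_program = [0x01, 0x00]; // ADD with an empty stack
    println!("{:?}", check_stack_invariant(&bad_program)); // Err(underflow)
}
```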

We can also construct a "formal proof" that certain invariants are preserved. As far as I know, this approach (rather than "virtualization") has been used in some browser contexts. By SNARK-proving such a formal proof, you can achieve a similar result.

Of course, the simplest option is to just go for it…

Building a Minimal "Chained" MMU

You may have implied this in your article, but allow me to spell it out: to eliminate virtualization overhead, you must execute the compiled code directly, which means you need some way to prevent contracts (now executable programs) from writing into the memory of the kernel (the non-EVM implementation).

Therefore, we need some kind of "memory management unit" (MMU). The paging mechanism of traditional computers is probably unnecessary, because the "physical" memory space here is practically unbounded. This MMU should be as minimal as possible (since it sits at the same level of abstraction as the architecture itself), but certain functions (such as transaction atomicity) could be pushed down into that layer, as sketched below.
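Here is a minimal Rust sketch of the kind of MMU described above, under my own assumptions (flat segments instead of paging, a kernel/contract privilege bit, and a write journal so transaction atomicity can live in this layer). None of the names or layouts are a proposal; they only illustrate how small such a unit could be.

```rust
// Minimal segment-based MMU sketch: no paging, just bounds and privilege
// checks on every access, plus a journal for transaction rollback.

use std::collections::HashMap;

#[derive(PartialEq, Clone, Copy)]
enum Mode { Kernel, Contract }

#[derive(Clone, Copy)]
struct Segment { base: u64, len: u64, writable_by_contract: bool }

struct Mmu {
    segments: Vec<Segment>,
    memory: HashMap<u64, u8>,        // sparse "physical" memory
    journal: Vec<(u64, Option<u8>)>, // old values, for transaction rollback
}

impl Mmu {
    fn check(&self, mode: Mode, addr: u64, write: bool) -> Result<(), String> {
        for seg in &self.segments {
            if addr >= seg.base && addr < seg.base + seg.len {
                // The kernel (the provable EVM implementation) may touch anything;
                // contract code may only write into segments marked for it.
                if write && mode == Mode::Contract && !seg.writable_by_contract {
                    return Err(format!("contract write to protected address {addr:#x}"));
                }
                return Ok(());
            }
        }
        Err(format!("access to unmapped address {addr:#x}"))
    }

    fn store(&mut self, mode: Mode, addr: u64, value: u8) -> Result<(), String> {
        self.check(mode, addr, true)?;
        self.journal.push((addr, self.memory.get(&addr).copied()));
        self.memory.insert(addr, value);
        Ok(())
    }

    /// Transaction atomicity pushed down into the MMU layer: undo all writes.
    fn revert(&mut self) {
        while let Some((addr, old)) = self.journal.pop() {
            match old {
                Some(v) => { self.memory.insert(addr, v); }
                None => { self.memory.remove(&addr); }
            }
        }
    }
}

fn main() {
    let mut mmu = Mmu {
        segments: vec![
            Segment { base: 0x0000, len: 0x1000, writable_by_contract: false }, // kernel
            Segment { base: 0x1000, len: 0x1000, writable_by_contract: true },  // contract heap
        ],
        memory: HashMap::new(),
        journal: Vec::new(),
    };
    assert!(mmu.store(Mode::Contract, 0x1004, 42).is_ok());
    assert!(mmu.store(Mode::Contract, 0x0004, 42).is_err()); // blocked kernel write
    mmu.revert(); // roll the transaction back
}
```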

At this point, the provable EVM will become a kernel program running on this architecture.

RISC-V May Not Be the Best Choice

Interestingly, under all these constraints, the instruction set architecture (ISA) best suited to this task may not be RISC-V, but rather something closer to EOF-EVM, because:

  • "Small" opcodes actually lead to a lot of memory accesses, and existing proof methods struggle to handle them efficiently.

  • To reduce branching overhead: in our recent paper, Morgana, we demonstrated how to prove code with "static jumps" (similar to EOF) at precompile-level performance (see the sketch after this list).
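A small, hedged illustration of why static jumps help (this is not the Morgana construction itself): with the EVM's dynamic JUMP, the target is a run-time stack value, so the prover must establish a valid-destination lookup for every jump, whereas with an EOF-style relative jump the whole control-flow graph is known before proving starts.

```rust
// Toy contrast between dynamic and static jumps; the enum and the check are
// my own illustration, not a real bytecode format.

enum Instr {
    DynJump,         // EVM JUMP: target popped from the stack at run time
    StaticJump(i32), // EOF-style RJUMP: relative target fixed in the code
    Other,
}

/// With only static jumps, the full control-flow graph can be extracted in a
/// single pass over the code, before any execution or proving happens.
fn control_flow_known_statically(code: &[Instr]) -> bool {
    code.iter().all(|i| !matches!(i, Instr::DynJump))
}

fn main() {
    let eof_like = [Instr::Other, Instr::StaticJump(-2), Instr::Other];
    let evm_like = [Instr::Other, Instr::DynJump, Instr::Other];
    println!("EOF-like code, CFG known up front: {}", control_flow_known_statically(&eof_like));
    println!("Legacy EVM code, CFG known up front: {}", control_flow_known_statically(&evm_like));
}
```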

My suggestion is to build a new, proof-friendly architecture equipped with a minimal MMU, in which contracts run as separate executables. I do not think it should be RISC-V; it should be a new ISA optimized for the constraints of SNARK protocols, and one that partially inherits a subset of EVM opcodes may even be better. As we know, whether we like it or not, precompiles will always exist, so RISC-V brings no simplification here.

