Vitalik on the Possible Future of Ethereum (6): The Splurge

In the design of the Ethereum protocol, about half of this category covers different kinds of EVM improvements, while the rest consists of various niche topics; this is what "the Splurge" is for.

**Original Title: *Possible futures of the Ethereum protocol, part 6: The Splurge***

Author: Vitalik Buterin

Translation: zhouzhou, BlockBeats

The following is the original content (reorganized for readability):

Some things are hard to place in a single category, and the design of the Ethereum protocol contains many "details" that matter a great deal to its success. In fact, about half of this part covers different kinds of EVM improvements, while the rest consists of various niche topics; this is what "the Splurge" is for.

2023 Roadmap: The Splurge

The Splurge: Key Goals

  • Transform the EVM into a high-performance and stable "final state"
  • Introduce account abstraction into the protocol, allowing all users to enjoy safer and more convenient accounts
  • Optimize transaction fee economics, improving scalability while reducing risks
  • Explore advanced cryptography to significantly improve Ethereum in the long term

EVM Improvements

What problems does it solve?

The current EVM is difficult to analyze statically, making it challenging to create efficient implementations, formally verify code, and further extend it. Additionally, the EVM is inefficient and struggles to implement many forms of advanced cryptography unless explicitly supported through precompiles.

What is it, and how does it work?

The first step in the current EVM improvement roadmap is the EVM Object Format (EOF), which is planned to be included in the next hard fork. EOF is a series of EIPs that specify a new version of EVM code with many unique features, the most notable being:

  • Separation of code (executable but not readable from the EVM) and data (readable but not executable)
  • Prohibition of dynamic jumps, allowing only static jumps
  • EVM code can no longer observe gas-related information
  • Introduction of a new explicit subroutine mechanism

Structure of EOF Code

The Splurge: EVM Improvements (Continued)

Legacy contracts will continue to exist and can be created, although they may eventually be gradually deprecated (and possibly forcibly converted to EOF code). New contracts will benefit from the efficiency improvements brought by EOF—first through slightly smaller bytecode via subroutine features, followed by EOF-specific new functionalities or reduced gas costs.

After the introduction of EOF, further upgrades become easier, with the most developed being the EVM Modular Arithmetic Extension (EVM-MAX). EVM-MAX creates a set of new operations specifically for modular arithmetic and places them in a new memory space that cannot be accessed by other opcodes, enabling optimizations such as Montgomery multiplication.
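To make the efficiency argument concrete, here is a minimal Python sketch of Montgomery multiplication, the kind of technique a dedicated modular-arithmetic memory space makes practical. This is illustrative only and is not taken from EIP-6690; the function and constant names are my own.

```python
# A minimal sketch of Montgomery multiplication. Illustrative Python, not the
# EIP-6690 specification; all names here are assumptions of this sketch.

def montgomery_constants(modulus: int, word_bits: int = 256):
    """Precompute R = 2^word_bits and n' = -modulus^-1 mod R for an odd modulus."""
    assert modulus % 2 == 1, "Montgomery form requires an odd modulus"
    r = 1 << word_bits
    n_prime = (-pow(modulus, -1, r)) % r
    return r, n_prime

def montgomery_mul(a_mont: int, b_mont: int, modulus: int, word_bits: int, n_prime: int) -> int:
    """Return a*b*R^-1 mod modulus, using only multiplications, masks, and shifts."""
    r_mask = (1 << word_bits) - 1
    t = a_mont * b_mont
    m = (t * n_prime) & r_mask          # reduction factor; "mod R" is a cheap mask
    u = (t + m * modulus) >> word_bits  # exact division by R, since t + m*N ≡ 0 mod R
    return u - modulus if u >= modulus else u

# Usage: convert operands into Montgomery form (x*R mod N), multiply, convert back.
N = 2**255 - 19                          # an odd prime modulus (the Curve25519 prime)
WORD_BITS = 256
r, n_prime = montgomery_constants(N, WORD_BITS)
a, b = 123456789, 987654321
a_m, b_m = (a * r) % N, (b * r) % N
prod_m = montgomery_mul(a_m, b_m, N, WORD_BITS, n_prime)
assert (prod_m * pow(r, -1, N)) % N == (a * b) % N
```

The point of the reserved memory space is that values can be kept in Montgomery form between operations, so the conversions at the start and end are paid only once per sequence of operations rather than per multiplication.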

A newer idea is to combine EVM-MAX with Single Instruction Multiple Data (SIMD) features. SIMD has been a concept in Ethereum for a long time, first proposed by Greg Colvin's EIP-616. SIMD can be used to accelerate many forms of cryptography, including hash functions, 32-bit STARKs, and lattice-based cryptography. The combination of EVM-MAX and SIMD makes these two performance-oriented extensions a natural pairing.

A rough design of a combined EIP will start with EIP-6690, then:

  • Allow (i) any odd number or (ii) any power of two up to 2^768 as the modulus
  • For each EVM-MAX opcode (addition, subtraction, multiplication), which takes 3 immediates x, y, z, add a version that instead takes 7 immediates: x_start, x_skip, y_start, y_skip, z_start, z_skip, count. In Python code, these opcodes would work similarly to:

for i in range(count):
    mem[z_start + z_skip * i] = op(
        mem[x_start + x_skip * i],
        mem[y_start + y_skip * i]
    )

In actual implementation, this would be processed in parallel.

  • Possibly also add XOR, AND, OR, NOT, and SHIFT (both cyclic and non-cyclic), at least for power-of-two moduli. Also add ISZERO (which pushes its output to the EVM main stack). Together, this would be powerful enough to implement elliptic curve cryptography, small-field cryptography (e.g. Poseidon, Circle STARKs), traditional hash functions (e.g. SHA256, KECCAK, BLAKE), and lattice-based cryptography. Other EVM upgrades could also be implemented, but have received less attention so far.
  • EOF: https://evmobjectformat.org/
  • EVM-MAX: https://eips.ethereum.org/EIPS/eip-6690
  • SIMD: https://eips.ethereum.org/EIPS/eip-616

Remaining Work and Trade-offs

Currently, EOF is planned to be included in the next hard fork. While it is always possible to remove it at the last minute—features have been temporarily removed in previous hard forks—doing so would face significant challenges. Removing EOF means that any future upgrades to the EVM would need to be done without EOF, which is possible but may be more difficult.

The main trade-off for the EVM lies in L1 complexity versus infrastructure complexity. EOF requires a significant amount of code to be added to EVM implementations, and the static code checks are relatively complex. In exchange, however, we get simplifications to high-level languages, simplifications to EVM implementations, and other benefits. A case can be made that any roadmap for continued improvement of Ethereum L1 should include and build on EOF.

An important task that remains is to implement functionalities similar to EVM-MAX plus SIMD and benchmark the gas consumption of various cryptographic operations.

How does it interact with other parts of the roadmap?

When L1 adjusts its EVM, it becomes easier for L2s to make the same adjustments; if the two adjust out of sync, this can create incompatibilities and other downsides. Additionally, EVM-MAX and SIMD can reduce the gas costs of many proof systems, making L2s more efficient. They also make it easier to replace more precompiles with EVM code that performs the same tasks, without a large loss of efficiency.

Account Abstraction

What problems does it solve?

Currently, transactions can only be verified in one way: ECDSA signatures. Initially, account abstraction aimed to go beyond this, allowing the verification logic of accounts to be any EVM code. This can enable a range of applications:

  • Transition to quantum-resistant cryptography
  • Rotate old keys (widely regarded as a recommended security practice)
  • Multi-signature wallets and social recovery wallets
  • Use one key for low-value operations and another key (or set of keys) for high-value operations

  • Allow privacy protocols to operate without relays, which significantly reduces their complexity and eliminates a key central point of dependency

Since account abstraction was proposed in 2015, its goals have also expanded to include a large number of "convenience goals," such as allowing an account that holds no ETH but holds some ERC20 tokens to pay gas with those tokens. Below is a summary chart of these goals:

MPC (multi-party computation) is a roughly 40-year-old technique for splitting a key into multiple pieces stored on multiple devices and using cryptographic techniques to generate signatures without ever directly recombining the pieces.

EIP-7702 is a proposal planned for inclusion in the next hard fork. It grew out of an increasing recognition that the convenience benefits of account abstraction should be made available to all users, including today's EOA users, in order to improve everyone's experience in the short term and avoid a split into two ecosystems.

This work began with EIP-3074 and ultimately formed EIP-7702. EIP-7702 provides the "convenience features" of account abstraction to all users, including today's EOAs (Externally Owned Accounts, i.e., accounts controlled by ECDSA signatures).

From the chart, it can be seen that while some challenges (especially the "convenience" challenges) can be addressed through incremental techniques such as multi-party computation or EIP-7702, the primary security goal of the original account abstraction proposal can only be achieved by going back and solving the original problem: allowing smart contract code to control transaction verification. The reason this has not happened yet is the difficulty of implementing it securely.

What is it, and how does it work?

The core of account abstraction is simple: allowing smart contracts to initiate transactions, not just EOAs. The entire complexity comes from implementing this in a way that is friendly to maintaining a decentralized network and preventing denial-of-service attacks.

A typical key challenge is the multiple failure problem:

If the verification function of 1000 accounts relies on a single value S, and the current value S makes all transactions in the memory pool valid, then a single transaction that flips the value of S could invalidate all other transactions in the memory pool. This allows an attacker to send garbage transactions to the memory pool at a very low cost, clogging the resources of network nodes.
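A toy sketch of this failure mode, under the simplifying assumption that validation is just an equality check against a shared value S (all names here are hypothetical):

```python
# A toy illustration (not real mempool code) of the multiple-failure problem:
# 1000 accounts whose validation all depends on one shared value S.

class ToyAccount:
    def __init__(self, shared_state: dict):
        self.shared_state = shared_state

    def validate(self, tx: dict) -> bool:
        # Validation reads shared state, so it can change out from under the mempool.
        return self.shared_state["S"] == tx["expected_S"]

shared = {"S": 1}
accounts = [ToyAccount(shared) for _ in range(1000)]
mempool = [{"expected_S": 1} for _ in accounts]

# Every pending transaction is valid right now...
assert all(acct.validate(tx) for acct, tx in zip(accounts, mempool))

# ...until a single cheap transaction flips S, invalidating the whole mempool at once.
shared["S"] = 0
assert not any(acct.validate(tx) for acct, tx in zip(accounts, mempool))
```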

After years of effort aimed at expanding functionality while limiting denial-of-service (DoS) risks, the solution for achieving "ideal account abstraction" has finally been reached: ERC-4337.

The working principle of ERC-4337 is to divide the processing of user operations into two phases: validation and execution. All validations are processed first, followed by all executions. In the memory pool, user operations are only accepted if the validation phase involves only their own account and does not read environmental variables. This prevents multiple failure attacks. Additionally, strict gas limits are enforced on the validation step.
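Below is a simplified sketch of this two-phase structure. It is not the ERC-4337 specification; the class and function names are hypothetical and the gas budget is not modeled, but it shows the ordering and the "touch only your own account" mempool rule:

```python
# A simplified sketch of the two-phase rule described above (not the ERC-4337
# spec itself): all validations run first, each restricted to the sender's own
# account, and only then do executions run. All names here are illustrative.

from dataclasses import dataclass
from typing import Callable, Dict, List, Set

@dataclass
class UserOperation:
    sender: str
    validate: Callable[[Dict[str, dict], Set[str]], bool]  # records accounts it touched
    execute: Callable[[Dict[str, dict]], None]

def process_bundle(user_ops: List[UserOperation], state: Dict[str, dict]) -> List[UserOperation]:
    accepted = []
    for op in user_ops:
        touched: Set[str] = set()
        # Phase 1: validation, under a strict gas budget (not modeled here) and
        # the mempool rule that only the sender's own account may be accessed.
        if op.validate(state, touched) and touched <= {op.sender}:
            accepted.append(op)
    # Phase 2: execution, only after every accepted validation has run.
    for op in accepted:
        op.execute(state)
    return accepted

# Usage: a user operation whose validation checks only its own nonce.
state = {"alice": {"nonce": 0, "balance": 100}}
op = UserOperation(
    sender="alice",
    validate=lambda st, touched: (touched.add("alice"), st["alice"]["nonce"] == 0)[1],
    execute=lambda st: st["alice"].update(nonce=1),
)
assert process_bundle([op], state) == [op] and state["alice"]["nonce"] == 1
```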

ERC-4337 was designed as an additional protocol standard (ERC) because, at the time, Ethereum client developers were focused on the Merge and had no extra energy to handle other functionalities. This is why ERC-4337 uses an object called user operation instead of a conventional transaction. However, we have recently realized the need to write at least part of it into the protocol.

There are two key reasons:

  1. The inherent inefficiency of EntryPoint as a contract: each bundle has a fixed overhead of about 100,000 gas, plus several thousand gas for each user operation.
  2. The necessity to ensure Ethereum properties: guarantees created by inclusion lists need to be transferred to account abstraction users.

Additionally, ERC-4337 expands on two functionalities:

  • Paymasters: This allows one account to pay fees on behalf of another account, which violates the rule that the validation phase can only access the sender's account itself, thus introducing special handling to ensure the security of the paymaster mechanism.
  • Aggregators: This supports the functionality of signature aggregation, such as BLS aggregation or SNARK-based aggregation. This is necessary for achieving the highest data efficiency on Rollups.
  • Talk on the history of account abstraction: https://www.youtube.com/watch?v=iLf8qpOmxQc
  • ERC-4337: https://eips.ethereum.org/EIPS/eip-4337
  • EIP-7702: https://eips.ethereum.org/EIPS/eip-7702
  • BLSWallet code (using aggregation functionality): https://github.com/getwax/bls-wallet
  • EIP-7562 (account abstraction written into the protocol): https://eips.ethereum.org/EIPS/eip-7562
  • EIP-7701 (protocol account abstraction based on EOF): https://eips.ethereum.org/EIPS/eip-7701

Remaining Work and Trade-offs

The main issue that needs to be resolved now is how to fully integrate account abstraction into the protocol. The recently popular proposal for account abstraction written into the protocol is EIP-7701, which implements account abstraction on top of EOF. An account can have a separate code section for validation, and if the account sets this code section, it will be executed during the validation step of transactions from that account.

The appeal of this approach is that it makes clear there are two equivalent ways to view native account abstraction:

  1. Treating EIP-4337 as part of the protocol
  2. A new type of EOA where the signature algorithm is the execution of EVM code

If we start with strict bounds on the complexity of the code that can be executed during validation (no access to external state, and initial gas limits set so low that they are useless for quantum-resistant or privacy-preserving applications), then the security of this approach is very clear: it simply swaps ECDSA verification for an EVM code execution that takes a similar amount of time.

However, over time we will need to relax these bounds, because enabling privacy-preserving applications to work without relays, and enabling quantum resistance, are both very important. To do this, we need to find more flexible ways of addressing denial-of-service (DoS) risks that do not require the validation step to be extremely minimal.

The main trade-off seems to be between "quickly shipping a solution that satisfies fewer people" and "waiting longer for a potentially more ideal solution," with the ideal approach possibly being some kind of hybrid. One hybrid is to ship some use cases faster and leave more time to work out the others. Another is to first deploy a more ambitious version of account abstraction on L2. However, the challenge here is that L2 teams need confidence that a proposal will be adopted before they are willing to implement it, especially if they want to ensure that L1 and/or other L2s will adopt something compatible in the future.

Another application we need to consider is key storage accounts, which store account-related state on L1 or dedicated L2 but can be used on L1 and any compatible L2. Effectively achieving this may require L2 to support opcodes such as L1SLOAD or REMOTESTATICCALL, but this also requires the account abstraction implementation on L2 to support these operations.

How does it interact with other parts of the roadmap?

Inclusion lists need to support account abstraction transactions. In practice, the demand for inclusion lists is very similar to the demand for decentralized memory pools, although there is slightly more flexibility for inclusion lists. Additionally, the implementation of account abstraction should achieve coordination between L1 and L2 as much as possible. If we expect most users to use key storage Rollups in the future, the design of account abstraction should be based on this.

EIP-1559 Improvements

What problems does it solve?

EIP-1559 was activated on Ethereum in 2021, significantly improving the average block inclusion time.

Waiting Time

However, the current implementation of EIP-1559 is not perfect in several ways:

  1. The formula is slightly flawed: instead of targeting 50% full blocks, it targets roughly 50-53% full, depending on variance (this relates to what mathematicians call the arithmetic-geometric mean inequality).
  2. It does not adjust quickly enough in extreme cases.

The formula used for blobs (EIP-4844) is specifically designed to address the first issue and is overall more concise. However, neither EIP-1559 itself nor EIP-4844 attempts to solve the second issue. Therefore, the current situation is a chaotic intermediate state involving two different mechanisms, and there is a viewpoint that both need improvement over time.
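For reference, here is a simplified sketch of the two update rules being contrasted. The shapes follow the published EIP-1559 and EIP-4844 formulas, but this is illustrative Python rather than client code, and math.exp stands in for the spec's integer-only fake_exponential:

```python
# A simplified sketch contrasting the two mechanisms discussed above. The
# EIP-1559 update is multiplicative in the previous block's fullness, while the
# EIP-4844 blob fee is an exponential of cumulative "excess" usage, which is what
# lets it target average usage exactly.

import math

BASE_FEE_MAX_CHANGE_DENOMINATOR = 8

def next_base_fee_1559(base_fee: int, gas_used: int, gas_target: int) -> int:
    # Simplified: the real rule handles the increase and decrease cases separately.
    delta = base_fee * (gas_used - gas_target) // gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR
    return max(base_fee + delta, 1)

MIN_BLOB_BASE_FEE = 1
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477

def next_excess_blob_gas(excess: int, blob_gas_used: int, target_blob_gas: int) -> int:
    # Excess usage accumulates when blocks are above target and drains when below.
    return max(excess + blob_gas_used - target_blob_gas, 0)

def blob_base_fee(excess_blob_gas: int) -> int:
    return math.floor(MIN_BLOB_BASE_FEE * math.exp(excess_blob_gas / BLOB_BASE_FEE_UPDATE_FRACTION))
```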

Additionally, there are other weaknesses in Ethereum's resource pricing that are unrelated to EIP-1559 but can be addressed through adjustments to it. One major issue is the gap between the average case and the worst case: resource prices in Ethereum must be set to handle the worst case, in which a block's entire gas consumption is spent on a single resource, but actual average usage is far below this, which leads to inefficiency.

What is Multidimensional Gas, and how does it work?

The solution to these inefficiencies is Multidimensional Gas: setting different prices and limits for different resources. This concept is technically independent of EIP-1559, but the existence of EIP-1559 makes it easier to implement this solution. Without EIP-1559, optimally packing a block with multiple resource constraints would be a complex multidimensional knapsack problem. With EIP-1559, most blocks will not reach full capacity on any resource, so a simple algorithm like "accept any transaction that pays enough fees" is sufficient.
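A toy sketch of that greedy rule under multiple resource dimensions, with hypothetical resource names and transaction fields:

```python
# A toy sketch of the greedy rule described above: with a per-resource base fee,
# a block builder can accept any transaction that pays enough on every dimension
# and still fits, rather than solving a multidimensional knapsack problem.
# Resource names, fields, and numbers are illustrative.

RESOURCES = ("execution", "calldata", "state_access")

def greedy_pack(txs, base_fees, limits):
    used = {r: 0 for r in RESOURCES}
    included = []
    for tx in txs:
        pays_enough = all(tx["fee_per_unit"][r] >= base_fees[r] for r in RESOURCES)
        fits = all(used[r] + tx["usage"][r] <= limits[r] for r in RESOURCES)
        if pays_enough and fits:
            included.append(tx)
            for r in RESOURCES:
                used[r] += tx["usage"][r]
    return included

# Because blocks rarely hit any limit under EIP-1559-style targeting, this greedy
# pass is close to optimal in practice, which is the point made above.
```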

Currently, we have multidimensional gas for execution and blobs; in principle, we can extend it to more dimensions, such as calldata (transaction data), state reads/writes, and state size growth.

EIP-7706 introduces a new gas dimension specifically for calldata. At the same time, it simplifies the multidimensional gas mechanism by unifying the three types of gas into one (EIP-4844 style) framework, thus also addressing the mathematical flaws of EIP-1559. EIP-7623 is a more precise solution that addresses the resource issues between average and worst-case scenarios by more strictly limiting maximum calldata without introducing an entirely new dimension.

A further research direction is to address the update rate issue, seeking a faster base fee calculation algorithm while retaining the key invariants introduced by the EIP-4844 mechanism (i.e., over the long term, average usage is exactly close to the target value).

  • EIP-1559 FAQ
  • Empirical analysis of EIP-1559
  • Proposed improvements to allow rapid adjustment
  • The section on the base fee mechanism in the EIP-4844 FAQ
  • EIP-7706
  • EIP-7623
  • Multidimensional gas

What work remains, and what are the trade-offs?

The main trade-offs of Multidimensional Gas are twofold:

  1. Increased protocol complexity: Introducing Multidimensional Gas will make the protocol more complex.
  2. Increased complexity of the optimal algorithm required to fill blocks: The best algorithm for achieving block capacity will also become complex.

The protocol complexity is relatively small for calldata, but for those gas dimensions within the EVM (such as storage reads and writes), the complexity will increase. The issue is that not only do users set gas limits, but contracts also set limits when calling other contracts. Currently, the only way they set limits is unidimensional.

A simple solution is to make Multidimensional Gas available only within EOF, as EOF does not allow contracts to set gas limits when calling other contracts. Non-EOF contracts need to pay for the gas costs of all types when performing storage operations (for example, if SLOAD occupies 0.03% of the block storage access gas limit, then non-EOF users will also be charged 0.03% of the execution gas limit fee).
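A small worked version of the 0.03% example above, with illustrative numbers (the limits shown are assumptions, not proposed values):

```python
# A worked version of the 0.03% example above. Under the rule described, a non-EOF
# SLOAD is charged the same fraction of the execution gas limit as it consumes of
# the (hypothetical) per-block storage-access limit. All numbers are illustrative.

storage_access_limit = 1_000_000   # hypothetical per-block storage-access gas limit
execution_gas_limit = 30_000_000   # execution gas limit of the block
sload_storage_cost = 300           # hypothetical storage-access cost of one SLOAD

fraction = sload_storage_cost / storage_access_limit       # 300 / 1,000,000 = 0.03%
execution_gas_charged = fraction * execution_gas_limit     # 0.03% of 30M = 9,000 gas
print(f"{fraction:.2%} of the storage limit -> {execution_gas_charged:,.0f} execution gas")
```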

Further research on Multidimensional Gas will help understand these trade-offs and find the ideal balance.

How does it interact with other parts of the roadmap?

The successful implementation of Multidimensional Gas can significantly reduce resource usage in certain "worst-case" scenarios, thereby alleviating the pressure to optimize performance to support demands such as STARKed hash-based binary trees. Setting a clear target for state size growth will make it easier for client developers to plan and estimate requirements in the future.

Verifiable Delay Functions (VDFs)

What problem does it solve?

Currently, Ethereum uses RANDAO-based randomness to select proposers, where the randomness works by requiring each proposer to reveal their pre-committed secret and mixing each revealed secret into the randomness.

Each proposer thus has "1 bit of manipulation power": they can change the randomness by not showing up (at a cost). This approach is reasonable for finding proposers, as the situation where you give up one opportunity to gain two new proposing opportunities is quite rare. However, this is not ideal for on-chain applications that require randomness. Ideally, we should find a more robust source of randomness.
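The following toy sketch illustrates the mechanism and the "1 bit" of power; the mixing function is a simplified stand-in for the real protocol's handling of BLS reveals:

```python
# A toy sketch of RANDAO-style mixing and the "1 bit of manipulation power"
# described above. The real protocol mixes in hashes of BLS reveals; this
# simplified stand-in just XORs a hash of the reveal into the accumulator.

import hashlib

def mix(randao: bytes, reveal: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(randao, hashlib.sha256(reveal).digest()))

randao = bytes(32)                      # current accumulated randomness
honest_reveal = b"proposer-42-reveal"   # the pre-committed secret

outcome_if_revealed = mix(randao, honest_reveal)  # one possible next randomness
outcome_if_withheld = randao                      # skip the slot, keep the old mix
# The proposer can choose whichever of these two outcomes it prefers,
# at the cost of a missed proposal: exactly one bit of influence.
```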

What is a VDF, and how does it work?

A Verifiable Delay Function is a function that can only be computed sequentially and cannot be accelerated through parallelization. A simple example is repeated hashing: for i in range(10**9): x = hash(x). The output, verified using a SNARK, can serve as a random value.
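A minimal sketch of such an iterated-hash delay function follows; in a real VDF the output would come with a succinct proof (for example a SNARK over the hash chain) so that verification does not require re-running the computation:

```python
# A minimal sketch of the iterated-hash delay function mentioned above. Evaluation
# is inherently sequential. Here the "verifier" naively re-runs the computation,
# which is exactly what a succinct proof would avoid.

import hashlib

def vdf_eval(seed: bytes, iterations: int) -> bytes:
    x = seed
    for _ in range(iterations):        # each step depends on the previous one
        x = hashlib.sha256(x).digest()
    return x

def vdf_verify_naive(seed: bytes, iterations: int, output: bytes) -> bool:
    return vdf_eval(seed, iterations) == output

out = vdf_eval(b"block-randao-mix", 10_000)
assert vdf_verify_naive(b"block-randao-mix", 10_000, out)
```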

The idea is to make the selection based on information that is available at time T, while the output is not yet known at time T: the output only becomes available some time after T, once someone has fully run the computation. Since anyone can run the computation, there is no way to withhold the result, and therefore no way to manipulate it.

The main risk of Verifiable Delay Functions is accidental optimization: someone finds a way to run the function faster than expected, thereby manipulating the information they reveal at time T.

Accidental optimization can occur in two ways:

  1. Hardware acceleration: Someone creates an ASIC that runs the computation loop faster than existing hardware.
  2. Accidental parallelization: Someone finds a way to run the function faster through parallelization, even if doing so requires 100 times the resources.

The task of creating a successful VDF is to avoid these two issues while keeping efficiency practical (for example, a hashing-based method has the problem that real-time SNARK proof of hashes requires heavy hardware). Hardware acceleration is typically addressed by a public interest participant creating and distributing near-optimal VDF ASICs.

  • VDF research website: vdfresearch.org
  • Thoughts on attacks against VDFs in Ethereum (2018)
  • Attacks against MinRoot, a proposed VDF

What work remains, and what are the trade-offs?

Currently, there is no VDF construction that fully meets the requirements of Ethereum researchers in all aspects. More work is needed to find such a function. If found, the main trade-off is whether to incorporate it: a simple trade-off between functionality and protocol complexity and security risks.

If we consider the VDF to be secure, but it ultimately turns out to be insecure, then depending on its implementation, security will degrade to the RANDAO assumption (1 bit of manipulation power per attacker) or a slightly worse case. Therefore, even if the VDF fails, it will not break the protocol, but it will undermine applications or any new protocol features that strongly rely on it.

How does it interact with other parts of the roadmap?

VDFs are a relatively self-contained component of the Ethereum protocol. Besides increasing the security of proposer selection, they also have applications in (i) on-chain applications that rely on randomness and (ii) encrypted memory pools, although building encrypted memory pools on VDFs still depends on additional cryptographic discoveries that have yet to be made.

One point to keep in mind is that, given uncertainty about hardware, there will be some "slack" between when a VDF output is produced and when it is needed. This means the information will be available a few blocks early. This can be an acceptable cost, but it should be taken into account in designs such as single-slot finality or committee selection.

Obfuscation and One-Time Signatures: The Distant Future of Cryptography

What problem does it solve?

One of Nick Szabo's most famous papers is his 1997 essay on the "God Protocol." In this paper, he pointed out that many multiparty applications rely on "trusted third parties" to manage interactions. In his view, the role of cryptography is to create a simulated trusted third party that does the same work without needing to trust any specific participant.

So far, we have only partially realized this ideal. If what we need is merely a transparent virtual computer whose data and computations cannot be shut down, censored, or tampered with, and privacy is not the goal, then blockchains can achieve this, although their scalability is limited.

If privacy is the goal, then until recently, we could only develop some specific protocols for particular applications: digital signatures for basic authentication, ring signatures and linkable ring signatures for raw anonymity, identity-based encryption for more convenient encryption under specific assumptions about trusted issuers, and blind signatures for Chaumian electronic cash, etc. This approach requires a lot of work for each new application.

In the 2010s, we first caught a glimpse of a different and more powerful approach based on programmable cryptography. Instead of creating a new protocol for each new application, we could use powerful new protocols—specifically ZK-SNARKs—to add cryptographic guarantees to arbitrary programs.

ZK-SNARKs allow users to prove any claim about data they hold, with the proof (i) being easy to verify and (ii) not leaking any data beyond the claim itself. This is a huge advance for both privacy and scalability, and I liken it to the impact of transformers in artificial intelligence: years of application-specific work by thousands of people were suddenly superseded by a general-purpose solution capable of handling a surprisingly wide range of problems.

However, ZK-SNARKs are just the first of three extremely powerful general-purpose primitives. These protocols are so powerful that when I think of them, they remind me of a set of very powerful cards from "Yu-Gi-Oh!", the card game I played and TV show I watched as a child: the Egyptian God Cards.

The Egyptian God Cards are three extremely powerful cards, and the legend says that the process of creating these cards could be deadly, and their power made them banned in duels. Similarly, in cryptography, we also have this set of three Egyptian God protocols:

What are ZK-SNARKs, and how do they work?

ZK-SNARKs are one of the three protocols we already have, with a high level of maturity. In the past five years, significant improvements in prover speed and developer friendliness have made ZK-SNARKs the cornerstone of Ethereum's scalability and privacy strategies. However, ZK-SNARKs have an important limitation: you need to know the data to prove it. Each state in a ZK-SNARK application must have a unique "owner" who must be present to approve reads or writes to that state.

The second protocol that does not have this limitation is Fully Homomorphic Encryption (FHE), which allows you to perform any computation on encrypted data without looking at the data. This enables you to compute on users' data, starting from the users' interests, while keeping the data and algorithms private.

It also allows you to scale voting systems like MACI to achieve nearly perfect security and privacy guarantees. For a long time, FHE was considered too inefficient for practical use, but it has now finally become efficient enough to see real applications emerging.

Cursive is an application that uses two-party computation and fully homomorphic encryption (FHE) for privacy-preserving discovery of common interests.

However, FHE also has its limitations: any technology based on FHE still requires someone to hold the decryption key. This can be an M-of-N distributed setup, and you can even use Trusted Execution Environments (TEEs) to add a second layer of protection, but this is still a limitation.

Next is the third protocol, which is more powerful than the combination of the first two: Indistinguishability Obfuscation. While this technology is still far from maturity, as of 2020, we have obtained theoretically effective protocols based on standard security assumptions and have recently begun to implement them.

Indistinguishability Obfuscation allows you to create an "encrypted program" that performs arbitrary computations while hiding all internal details of the program. For example, you could put a private key into an obfuscation program that only allows you to use it to sign primes and distribute this program to others. They can use this program to sign any prime but cannot extract the key. However, its capabilities go far beyond this: combined with hashing, it can be used to implement any other cryptographic primitive and more functionalities.
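The toy sketch below illustrates only the interface being described, not obfuscation itself: in plain Python the key is trivially extractable from the closure, and hiding it is exactly what indistinguishability obfuscation would add. The "signature" here is a keyed-hash stand-in, not a real signature scheme.

```python
# A toy illustration of the interface only: a program that holds a key and will
# sign nothing but primes. Indistinguishability obfuscation is what would let you
# hand this program out without revealing the key inside it.

import hashlib

def make_prime_signer(secret_key: bytes):
    def is_prime(n: int) -> bool:
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    def sign(n: int) -> bytes:
        if not is_prime(n):
            raise ValueError("this program only signs prime numbers")
        # Keyed-hash stand-in for a real signature (ECDSA, BLS, ...).
        return hashlib.sha256(secret_key + n.to_bytes(32, "big")).digest()

    return sign

signer = make_prime_signer(b"super-secret-key")
signature = signer(101)        # works: 101 is prime
assert len(signature) == 32
# signer(100) would raise: the program refuses to sign composites.
```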

The only thing that indistinguishability obfuscation cannot do is prevent itself from being copied. However, for this, there are even more powerful technologies on the horizon, although they rely on everyone having quantum computers: Quantum One-Shot Signatures.

By combining obfuscation and one-time signatures, we can build an almost perfect trustless third party. The only goal that cryptography alone cannot achieve, and that still requires a blockchain to guarantee, is censorship resistance. These technologies can not only make Ethereum itself more secure but also enable the construction of more powerful applications on top of it.

To better understand how these primitives add extra capabilities, let us take voting as a key example. Voting is an interesting problem because it must satisfy many demanding security properties, including very strong verifiability and privacy. While voting protocols with strong security properties have existed for decades, we can make the problem harder for ourselves by requiring a design that can handle arbitrary voting protocols: quadratic voting, pairwise-bounded quadratic funding, cluster-matching quadratic funding, and so on. In other words, we want the "tallying" step to be an arbitrary program.

First, let's assume we publicly place the voting results on the blockchain. This gives us public verifiability (anyone can verify whether the final results are correct, including tallying rules and eligibility rules) and censorship resistance (it is impossible to prevent people from voting). But we lack privacy.

Next, we add ZK-SNARKs, and now we have privacy: each vote is anonymous while ensuring that only authorized voters can vote, and each voter can only vote once.

Then, we introduce the MACI mechanism, where votes are encrypted to a central server's decryption key. The central server is responsible for the tallying process, including eliminating duplicate votes, and publishes a ZK-SNARK proof of the result. This retains the previous guarantees (even if the server cheats!), and if the server is honest it adds a coercion-resistance guarantee: users cannot prove how they voted, even if they want to. This is because, although a user can prove their vote, they cannot prove that they did not later cast another vote that cancels it out. This prevents bribery and other attacks.

We then run the tally inside FHE and decrypt it with an N/2-of-N threshold decryption. This strengthens the coercion-resistance guarantee from 1-of-1 to N/2-of-N.

We obfuscate the tallying program and design the obfuscation program so that it can only output results when authorized, with authorization being a proof of blockchain consensus, some form of proof of work, or a combination of both. This makes the coercion-resistant guarantee nearly perfect: in the case of blockchain consensus, 51% of validators must collude to break it; in the case of proof of work, even if everyone colludes, retallying with a different subset of voters to attempt to extract the behavior of a single voter will be extremely costly. We can even make slight random adjustments to the final tallying results to further increase the difficulty of extracting the behavior of a single voter.

We add one-time signatures, a primitive that relies on quantum computing and allows a key to be used only once to sign a certain type of message. This makes the coercion-resistance guarantee truly perfect.

Indistinguishability obfuscation also supports other powerful applications. For example:

  1. Decentralized Autonomous Organizations (DAOs), on-chain auctions, and other applications with arbitrary internal secret states.
  2. Truly universal trusted setups: someone can create an obfuscated program that contains a key and can run any program, providing output by feeding hash(key, program) into that program as an input. Given such a program, anyone can embed it inside a program of their own, combining the pre-existing key with their own key and thereby extending the setup (see the sketch after this list). This can be used to generate a 1-of-N trusted setup for any protocol.
  3. ZK-SNARKs whose verification is just a signature: implementing this is quite simple: use a trusted setup in which someone creates an obfuscated program that signs a message with the key only if it is accompanied by a valid ZK-SNARK.
  4. Encrypted memory pools: it becomes very simple to encrypt transactions so that they are only decrypted when some future on-chain event occurs. This can even include the successful execution of a verifiable delay function (VDF).
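To make item 2 more concrete, here is a toy sketch of the hash(key, program) idea; the "obfuscation" is only pretended (the keys are plainly visible in the closures), which is exactly what real indistinguishability obfuscation would fix. All names are hypothetical.

```python
# A toy sketch of the universal-trusted-setup idea from item 2 above. Here the
# "obfuscated program" is just a closure whose key is plainly visible; real
# indistinguishability obfuscation is what would make the key unextractable.

import hashlib
from typing import Callable

def make_setup_oracle(key: bytes) -> Callable[[bytes], bytes]:
    def run(program: bytes) -> bytes:
        # Output derived from the hidden key and the program description.
        return hashlib.sha256(key + program).digest()
    return run

def extend_setup(previous: Callable[[bytes], bytes], my_key: bytes) -> Callable[[bytes], bytes]:
    # Wrap the existing oracle with your own key: recovering the combined secret
    # now requires breaking both contributions, giving 1-of-N style trust.
    def run(program: bytes) -> bytes:
        return hashlib.sha256(previous(program) + my_key + program).digest()
    return run

oracle = make_setup_oracle(b"key-of-original-participant")
oracle = extend_setup(oracle, b"key-of-second-participant")
output = oracle(b"some circuit description")
assert len(output) == 32
```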

With one-time signatures, we can protect the blockchain from finality reversals due to 51% attacks, although censorship attacks may still be possible. Primitives similar to one-time signatures make quantum currency possible, thus solving the double-spending problem without a blockchain, although many more complex applications still require a chain.

If these primitives can become efficient enough, then most applications in the world could achieve decentralization. The main bottleneck will be verifying the correctness of the implementation.

Here are some links to existing research:

  • Indistinguishability obfuscation protocol (2021)
  • How obfuscation can help Ethereum
  • First known construction of one-shot signatures
  • Attempted implementation of obfuscation (1)
  • Attempted implementation of obfuscation (2)

What work remains to be done, and what are the trade-offs?

There is still much work to be done. Indistinguishability obfuscation is still very immature, and candidate constructions are far too slow (by many orders of magnitude) to be usable in applications. Indistinguishability obfuscation is famously "theoretically" polynomial-time, yet with practical running times that can exceed the lifetime of the universe. More recent protocols have brought running times down somewhat, but the overhead is still far too high for regular use: one implementer estimates a running time of one year.

Quantum computers do not even exist yet: everything you currently see on the internet is either a prototype incapable of computations larger than 4 bits, or not a substantive quantum computer at all, in the sense that while it may have quantum parts, it cannot run meaningful computations such as Shor's algorithm or Grover's algorithm. Recently, there have been signs that "real" quantum computers are not far away. However, even if "real" quantum computers appear soon, the day ordinary people have quantum computers on their laptops or phones may come decades after the day powerful institutions acquire ones capable of breaking elliptic curve cryptography.

For indistinguishability obfuscation, a key trade-off lies in the security assumptions, with more radical designs using special assumptions. These designs often have more realistic running times, but special assumptions can sometimes ultimately be broken. Over time, we may better understand lattices, leading to assumptions that are harder to break. However, this path is riskier. A more conservative approach is to stick with protocols whose security can be proven to reduce to "standard" assumptions, but this may mean we need to wait longer to obtain protocols that run fast enough.

How does it interact with other parts of the roadmap?

Extremely powerful cryptography could fundamentally change the game, for example:

  1. If the verification of ZK-SNARKs we obtain is as simple as signatures, we may no longer need any aggregation protocols; we could verify directly on-chain.
  2. One-time signatures could mean more secure proof-of-stake protocols.
  3. Many complex privacy protocols could be replaced simply by a privacy-preserving Ethereum Virtual Machine (EVM).
  4. Encrypted memory pools become easier to implement.

Initially, the benefits will appear at the application layer, as Ethereum's L1 essentially needs to remain conservative on security assumptions. However, the use at the application layer alone could be disruptive, just like the emergence of ZK-SNARKs.
