The three pillars of aggregation, settlement, and execution are in place: how do we measure the value of projects within the modular ecosystem?

As competition and technological advances continue to drive down the cost of infrastructure, integrating applications/chains with modular components becomes increasingly feasible.

Written by: Bridget Harris

Translated by: DeepTechFlow

The components of the modular stack have not received equal attention and innovation. While many projects have historically innovated on the data availability (DA) and sequencing layers, the execution and settlement layers were, until recently, relatively overlooked parts of the modular stack.

In the shared sequencer market, numerous projects are competing for market share, such as Espresso, Astria, Radius, Rome, and Madara, alongside RaaS providers like Caldera and Conduit, which are developing shared sequencers for the rollups built on top of them. Because these RaaS providers' core business model does not depend entirely on sequencing revenue, they can offer more favorable fee splits to their rollups. All of these products coexist with the option for rollups to run their own sequencer and decentralize it gradually over time in order to capture the fees it generates.

The sequencer market is different from the DA space, which is essentially an oligopoly of Celestia, Avail, and EigenDA, making it difficult for small newcomers outside the big three to disrupt the market. Projects either use the "incumbent" option, Ethereum, or choose an established DA layer, depending on the technology stack and alignment they are seeking. While using a DA layer saves significant costs, outsourcing the sequencer is not an obvious choice (from a cost perspective, not a security one), mainly because of the opportunity cost of forgoing the fees it generates. Many still argue that DA will become a commodity, but we have already seen in crypto that a strong liquidity moat combined with unique (hard-to-replicate) underlying technology makes commoditizing a layer of the stack harder than it looks. Regardless of these debates and changes, there are many DA layers and sequencers in production (in short, for some parts of the modular stack, each service has multiple competitors).

The execution and settlement layers (as well as the extended aggregation layer) have seen relatively less development, but they are now being iterated on in new ways to integrate better with the rest of the modular stack.

Revisiting the Relationship Between the Execution and Settlement Layers

The execution and settlement layers are closely intertwined: the settlement layer can serve as the place where the final results of state execution are defined, and it can also add enhanced functionality to the execution layer's outputs, making them more robust and secure. In practice, this can mean many different capabilities, such as the settlement layer resolving fraud disputes for the execution layer, verifying proofs, and bridging between different execution layers.

It is also worth mentioning that some teams are implementing bespoke execution environments directly within their own protocols. One example is Repyh Labs, which is building an L1 called Delta. This is essentially the opposite of a modular design, yet it still offers flexibility within a unified environment and has the advantage of technical compatibility, since the team does not have to spend time manually integrating each part of the modular stack. The downsides, of course, are liquidity isolation, the inability to choose the modular layers best suited to the design, and high cost.

Other teams choose to build L1s that are highly specific to one core function or application. One example is Hyperliquid, which built a purpose-specific L1 for its flagship native application, a perpetual futures trading platform. Although users have to bridge in from Arbitrum, its core architecture does not rely on the Cosmos SDK or other frameworks, so it can be iteratively customized and hyper-optimized for its primary use case.

Progress of the Execution Layer

The (previous-cycle, still-existing) predecessor of today's execution layers was the general-purpose alt-L1, whose essentially only edge over Ethereum was higher throughput. Historically, this meant that projects wanting dramatically better performance had to build their own alternative L1 from scratch, mainly because Ethereum itself lacked the technology; it simply meant embedding efficiency mechanisms directly into the general-purpose protocol. In this cycle, those performance improvements are instead achieved through modular design and mostly implemented on the dominant smart-contract platform (Ethereum), so both existing and new projects can leverage new execution-layer infrastructure without sacrificing Ethereum's liquidity, security, and community moat.

Now we are also seeing more mixing and matching of different virtual machines (execution environments) within a shared network, giving developers more flexible, better-customized execution layers. For example, Layer N lets developers run both generalized rollup nodes (e.g. SolanaVM, MoveVM as execution environments) and application-specific rollup nodes (e.g. a perps DEX, an orderbook DEX) on top of its shared state machine. They are also working toward full composability and shared liquidity across these different VM architectures, historically a difficult on-chain engineering problem to scale. Each application on Layer N can pass messages asynchronously within consensus without delays, which has traditionally been a pain point in crypto. Each xVM can also use a different database architecture, whether RocksDB, LevelDB, or a custom database built from scratch. The interoperability piece works through a "snapshot system" (an algorithm similar to Chandy-Lamport), in which chains can asynchronously transition to a new block without pausing the system. On the security side, fraud proofs can be submitted if a state transition is incorrect. With this design, their goal is to minimize execution time while maximizing overall network throughput.

Layer N
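
Layer N's snapshot-based interoperability can be illustrated with a toy version of the Chandy-Lamport idea: a chain records its local state when it starts (or first learns of) a snapshot via a marker message, and logs any in-flight messages, so no chain ever has to pause execution. The two-chain simulation below is a minimal sketch; all class and variable names are illustrative assumptions, not Layer N's actual API.

```python
from collections import deque

class Chain:
    """Toy process in a Chandy-Lamport-style snapshot (illustrative only)."""
    def __init__(self, name, state):
        self.name = name
        self.state = state            # local VM state, simplified to an int
        self.inbox = deque()          # channel from the other chain
        self.recorded_state = None
        self.recorded_channel = []    # in-flight messages captured mid-snapshot
        self.recording = False

    def start_snapshot(self, other):
        # Record own state, send a marker, and keep executing afterwards.
        self.recorded_state = self.state
        self.recording = True
        other.inbox.append(("MARKER", None))

    def receive(self, other):
        kind, payload = self.inbox.popleft()
        if kind == "MARKER":
            if self.recorded_state is None:
                # First marker seen: record local state and echo the marker.
                self.recorded_state = self.state
                other.inbox.append(("MARKER", None))
            self.recording = False    # channel recording on this link is done
        else:
            self.state += payload
            if self.recording:
                self.recorded_channel.append(payload)

a = Chain("rollup_a", 100)
b = Chain("rollup_b", 200)
b.inbox.append(("MSG", 5))   # a message already in flight from a to b
a.start_snapshot(b)
a.state += 1                 # a keeps executing; no pause needed
b.receive(a)                 # b applies the in-flight message
b.receive(a)                 # b sees the marker and records its state
a.receive(b)                 # a receives b's echoed marker
print(a.recorded_state, b.recorded_state)  # 100 205
```

The recorded cut (a at 100, b at 205, channels empty) is consistent even though both chains kept processing, which is the property that lets chains advance blocks asynchronously.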

To match these advances in customization, Movement Labs leverages the Move language (originally designed at Facebook, now used by networks like Aptos and Sui) for its VM/execution. Compared with other frameworks, Move has structural advantages, primarily in security and developer flexibility/expressiveness, two of the biggest issues in building on-chain today. Importantly, developers can also simply write Solidity and deploy on Movement: to achieve this, Movement has created a fully EVM-bytecode-compatible runtime that also works alongside the Move stack. Their rollup, M2, uses Block-STM parallelization to achieve higher throughput while still accessing Ethereum's liquidity moat (historically, Block-STM was only used in alt-L1s like Aptos, which obviously lack EVM compatibility).
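
The Block-STM approach can be sketched as optimistic concurrency control: transactions execute speculatively (conceptually in parallel), record their read sets, and are then validated in block order, with conflicting transactions re-executed. The code below is a heavily simplified, single-threaded sketch of that idea, not Movement's implementation; the transaction format is invented for illustration.

```python
# Toy optimistic parallel execution in the spirit of Block-STM (illustrative).
# Each "transaction" is a function returning (read_set, write_set).

def run_block(txs, state):
    # Phase 1: speculative execution, each tx against the initial state.
    results = [tx(dict(state)) for tx in txs]

    # Phase 2: validate in transaction order; if an earlier tx wrote a key
    # this tx read (a read-write conflict), abort and re-execute it.
    committed = dict(state)
    for i, tx in enumerate(txs):
        reads, writes = results[i]
        if any(committed.get(k) != v for k, v in reads.items()):
            reads, writes = tx(dict(committed))   # serial retry
        committed.update(writes)
    return committed

def transfer(src, dst, amt):
    def tx(state):
        reads = {src: state[src], dst: state[dst]}
        writes = {src: state[src] - amt, dst: state[dst] + amt}
        return reads, writes
    return tx

state = {"alice": 10, "bob": 0, "carol": 5}
block = [transfer("alice", "bob", 3), transfer("bob", "carol", 2)]
print(run_block(block, state))  # {'alice': 7, 'bob': 1, 'carol': 7}
```

The second transfer reads `bob`, which the first one wrote, so it fails validation and is re-executed; non-conflicting transactions commit their speculative results directly, which is where the parallel speedup comes from.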

MegaETH is also driving progress in the execution-layer space, notably through its parallelization engine and in-memory database, which lets the sequencer store the entire state in memory. Architecturally, they leverage:

  • Native code compilation for higher L2 performance (compute-intensive contracts can be sped up significantly; even non-compute-intensive ones still see more than a 2x speedup).

  • Relatively centralized block production, but relatively decentralized block validation.

  • Efficient state sync, where full nodes do not need to re-execute transactions but do need to be aware of state deltas so they can apply them to their local databases.

  • A new Merkle-tree update structure (updating the tree normally requires a large amount of storage); their approach is a new trie data structure that is memory- and disk-efficient. With in-memory computing, they can compress chain state into memory, so executing transactions touches only memory, not disk.
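
The state-sync point above can be made concrete with a toy sketch: the sequencer executes the block and ships only the resulting state delta, and full nodes merge that delta into their local database instead of re-executing every transaction. The flat key-value state and function names are illustrative assumptions, not MegaETH's actual format.

```python
# Toy sketch of state-diff sync (illustrative, not MegaETH's wire format).

def execute_block(state, txs):
    """Sequencer side: run txs, return the new state plus the delta."""
    new_state = dict(state)
    for key, value in txs:               # each tx credits `value` to `key`
        new_state[key] = new_state.get(key, 0) + value
    delta = {k: v for k, v in new_state.items() if state.get(k) != v}
    return new_state, delta

def apply_delta(state, delta):
    """Full-node side: no re-execution, just merge the delta."""
    state = dict(state)
    state.update(delta)
    return state

sequencer_state = {"a": 1, "b": 2}
txs = [("a", 10), ("c", 3)]
sequencer_state, delta = execute_block(sequencer_state, txs)

full_node_state = apply_delta({"a": 1, "b": 2}, delta)
assert full_node_state == sequencer_state  # nodes converge without re-executing
print(delta)  # {'a': 11, 'c': 3}
```

Only touched keys travel over the wire, which is what makes full-node sync cheap relative to re-execution.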

Proof aggregation is another part of the modular stack that has been explored and iterated on recently; an aggregator here is a prover that creates a single succinct proof from multiple succinct proofs. First, let's look at the aggregation layer as a whole and its historical and current trends in crypto.

Assigning Value to the Aggregation Layer

Historically, in markets outside of crypto, aggregators have captured less value than platforms or marketplaces.

CJ Gustafson

While I'm not sure whether this holds for crypto across the board, it clearly holds for decentralized exchanges, bridges, and lending protocols. For example, the combined market cap of 1inch and 0x (two leading DEX aggregators) is about $1 billion, only a small fraction of Uniswap's roughly $7.6 billion. This also applies to bridges: bridge aggregators like Li.Fi and Socket/Bungee appear to hold a smaller market share than platforms like Across. Although Socket supports 15 different bridges, its total bridged volume is similar to Across's (Socket ~$22 billion, Across ~$17 billion), and Across accounts for only a small portion of Socket/Bungee's recent volume.

In lending, Yearn Finance was the first protocol to act as a decentralized lending-yield aggregator, and its market cap now sits at about $250 million. By comparison, platform products like Aave (about $1.4 billion) and Compound (about $560 million) have earned higher valuations and greater relevance over time.

Traditional financial markets work similarly. For example, ICE (Intercontinental Exchange) and CME Group each have a market cap of about $75 billion, while "aggregators" like Charles Schwab and Robinhood are valued at about $132 billion and $15 billion, respectively. Within Schwab, trades are routed through many venues, including ICE and CME, yet its share of volume is disproportionate to its share of market value. Robinhood handles about 119 million options contracts per month, versus ICE's roughly 35 million, and options are not even a core part of Robinhood's business model; still, ICE is worth about 5x Robinhood on the public markets. So Schwab and Robinhood, acting as application-level aggregation interfaces that route customer order flow across venues, are not valued as richly as ICE and CME despite their significant trading volumes.

As consumers, we simply assign less value to aggregators.

This dynamic may not hold in crypto, however, if the aggregation layer is embedded directly into products/platforms/chains rather than standing alone. If the aggregator is tightly integrated into the chain itself, that is obviously a different architecture, and one whose development I am eager to watch. One example is Polygon's AggLayer, which lets developers easily connect their L1s and L2s into a network that aggregates proofs and enables a unified on-chain liquidity layer using the CDK.

The model works similarly to Avail's Nexus interoperability layer, which includes proof aggregation and a sequencer-auction mechanism to make its DA product more robust. Like Polygon's AggLayer, every chain or rollup that integrates with Avail becomes interoperable within the broader Avail ecosystem. In addition, Avail aggregates ordered transaction data from a variety of blockchain platforms and rollups, including Ethereum, all Ethereum rollups, Cosmos chains, Avail rollups, Celestia rollups, and hybrid constructions such as validiums, optimiums, and Polkadot parachains. Developers from any ecosystem can build permissionlessly on top of Avail's DA layer while using Avail Nexus for cross-ecosystem proof aggregation and messaging.

Nebra focuses specifically on proof aggregation and settlement, aggregating across different proof systems: for example, aggregating a proof from system xyz and a proof from system abc into a single agg_xyzabc, rather than aggregating only within one proof system. The architecture uses UniPlonK, which standardizes the verifier's work across families of circuits, making verification of proofs from different PlonK circuits more efficient and feasible. At its core, it uses zero-knowledge proofs themselves (recursive SNARKs) to scale the verification work, which is typically the bottleneck in these systems. For clients, the final settlement step becomes easier, since Nebra handles all the batch aggregation and settlement; teams only need to change an API contract call.
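
The interface this describes, many proofs in and one succinct object out, can be illustrated with a toy stand-in. A real recursive SNARK lets the verifier check the aggregate without ever seeing the child proofs; the hash-based sketch below only shows the shape of the API, and the names and scheme are invented for illustration, not Nebra's actual system.

```python
import hashlib

# Toy illustration of a proof-aggregation interface (NOT real cryptography:
# a recursive SNARK verifier would not need the child proofs at all).

def aggregate(proofs: list[bytes]) -> str:
    """Fold all child proofs into one fixed-size commitment."""
    acc = b""
    for p in proofs:
        acc = hashlib.sha256(acc + p).digest()
    return acc.hex()

def verify_aggregate(agg: str, proofs: list[bytes]) -> bool:
    """Check one 32-byte object instead of N independent proofs."""
    return aggregate(proofs) == agg

proofs = [b"proof_from_system_xyz", b"proof_from_system_abc"]
agg = aggregate(proofs)
assert verify_aggregate(agg, proofs)
assert not verify_aggregate(agg, [b"tampered", b"proof_from_system_abc"])
```

However the aggregate is built, the client-facing win is the same: one settlement call carries one fixed-size object regardless of how many proofs, or proof systems, fed into it.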

Astria is exploring interesting designs around how its shared sequencer works with proof aggregation. It leaves execution to the rollups themselves: rollups run execution-layer software over a given namespace of the shared sequencer, essentially an "execution API" through which rollups accept sequenced data. Support for validity proofs can also easily be added here to ensure blocks do not violate the EVM state-machine rules.

Here, products like Astria handle step #1→#2 (unordered transactions → ordered blocks), execution layers/aggregator nodes handle #2→#3 (ordered blocks → executed blocks), and protocols like Nebra handle the final step #3→#4 (executed blocks → succinct proofs). Nebra (or Aligned Layer) could theoretically also serve as a fifth step, where proofs are aggregated and then verified. Sovereign Labs is also working on something similar to this final step, with proof-aggregation-based bridging at the core of its architecture.

Sovereign Labs
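
The four-step pipeline described above (unordered transactions → ordered blocks → executed blocks → succinct proofs) can be sketched end to end. Each stage below is a deliberately naive stand-in: fee-priority ordering for the shared sequencer, a toy balance-update VM for execution, and a hash in place of a real succinct proof; none of these are the actual mechanisms of the projects named.

```python
import hashlib
import json

# Sketch of the modular pipeline: sequencing -> execution -> proving.

def sequence(mempool):
    """Stage 1 -> 2: a shared sequencer (e.g. Astria) orders transactions.
    Here we naively order by fee, highest first."""
    return sorted(mempool, key=lambda tx: tx["fee"], reverse=True)

def execute(ordered_txs, state):
    """Stage 2 -> 3: the rollup's execution layer applies ordered blocks."""
    for tx in ordered_txs:
        state[tx["to"]] = state.get(tx["to"], 0) + tx["amount"]
    return state

def prove(state):
    """Stage 3 -> 4: stand-in for a succinct proof of the executed state
    (the role Nebra or Aligned Layer plays); here just a commitment hash."""
    blob = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

mempool = [{"to": "a", "amount": 1, "fee": 2},
           {"to": "b", "amount": 5, "fee": 9}]
final_state = execute(sequence(mempool), {})
proof = prove(final_state)
print(final_state)  # {'b': 5, 'a': 1}
```

Because the stages only share data formats (ordered transactions, executed state, proofs), each one can be swapped for a different provider, which is exactly the modularity argument the article makes.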

More broadly, some application layers are starting to own their underlying infrastructure, partly because remaining a high-level application without controlling the underlying stack can create incentive problems and high user-acquisition costs. On the other hand, as competition and technological progress keep driving infrastructure costs down, it becomes ever more feasible for applications/appchains to integrate with modular components, and I believe that dynamic is much stronger, at least for now.

Through all of these innovations across the execution, settlement, and aggregation layers, higher efficiency, easier integration, stronger interoperability, and lower costs become possible. All of this ultimately means better applications for users and a better development experience for builders. It's a winning combination that enables more, and faster, innovation, and I look forward to what comes next.
