I. Outlook
1. Summary of Macroeconomic Aspects and Future Predictions
Last week, the Trump administration announced a 25% tariff on all non-American-made cars, a decision that has once again triggered panic in the market. This tariff policy could not only lead to a significant increase in the prices of imported cars and parts but may also provoke retaliatory measures from trade partners, further escalating international trade tensions. Moving forward, investors need to closely monitor the progress of trade negotiations and changes in the global economic landscape.
2. Market Fluctuations in the Cryptocurrency Industry and Warnings
Last week, the cryptocurrency market experienced a significant pullback triggered by macroeconomic fears, with previously accumulated gains rapidly reversed within just a few days. This volatility stems primarily from renewed uncertainty in the global macroeconomic environment. Looking ahead to this week, the market's focus will be on whether Bitcoin and Ethereum prices break decisively below their previous lows, levels that serve not only as crucial technical support but also as key psychological barriers for the market. On April 2, the U.S. officially began imposing reciprocal tariffs. If this move does not further exacerbate market panic, the cryptocurrency market may see an opportunity for a phase of bottom-fishing. However, investors must remain vigilant and closely monitor market dynamics and relevant indicators.
3. Industry and Sector Hotspots
Particle, a modular L1 chain abstraction platform whose round was led by Cobo and YZI with Hashkey participating, greatly enhances user experience and developer efficiency by simplifying cross-chain operations and payments, though it also faces challenges around liquidity and centralized management. Skate, focused on seamlessly connecting application-layer protocols across mainstream VMs, offers an innovative and efficient solution: by providing a unified application state, simplifying cross-chain task execution, and ensuring security, it significantly reduces complexity for developers and users in a multi-chain environment. Arcium is a fast, flexible, and low-cost infrastructure aimed at providing access to cryptographic computing through blockchain. And Walrus, an innovative decentralized storage solution, raised a record $140 million.
II. Market Hotspot Sectors and Potential Projects of the Week
1. Performance of Potential Sectors
1.1. Analyzing the Features of Skate, the Seamless Connection Protocol for Mainstream VM Applications Led by Hashkey
Skate is an infrastructure layer focused on dApps, allowing users to seamlessly interact with their native chains by connecting to all virtual machines (EVM, TonVM, SolanaVM). For users, Skate provides applications that run in their preferred environment. For developers, Skate manages the complexity of cross-chain interactions and introduces a new application paradigm, enabling applications to be built across all chains and virtual machines while using a unified application state to serve all chains.
Architecture Overview
The infrastructure of Skate consists of three foundational layers:
- Skate's Central Chain: The central hub that handles all logical operations and stores application states.
- Pre-confirmation AVS: An AVS deployed on Eigenlayer that facilitates the secure delegation of re-staked ETH to Skate's executor network. It serves as the primary source of truth, ensuring that executors perform the required operations on the target chain.
- Executor Network: A network composed of executors responsible for executing operations defined by applications. Each application has its own set of executors.
As the central chain, Skate maintains and updates a shared state and provides instructions to connected peripheral chains, which only respond to the call data Skate supplies. This process is carried out by Skate's executor network, where each executor is a registered AVS operator responsible for executing these tasks. In the event of dishonest behavior, the pre-confirmation AVS serves as the source of truth for penalizing violating operators.
User Flow
Skate is primarily driven by intents, where each intent encapsulates the key information expressing the operation the user wishes to execute, while also defining the necessary parameters and boundaries. Users only need to sign the intent through their local wallets and interact solely on that chain, creating a user-native environment.
The intent flow is as follows:
- Source Chain: Users will initiate operations on the TON/Solana/EVM chains by signing intents.
- Skate: Executors receive the intent and call the processIntent function. This creates a task that encapsulates the key information the executor needs to perform it, and the system emits a TaskSubmitted event. AVS validators listen for TaskSubmitted events and verify the content of each task; once consensus is reached in the pre-confirmation AVS, the forwarder issues the signatures required to execute the task.
- Target Chain: Executors call the executeTask function on the Gateway contract. The Gateway contract verifies that the task has been validated by the AVS, confirming the validity of the forwarder's signature before executing the functions defined in the task. The calldata of the function call is then executed, and the intent is marked as complete.
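The intent flow above can be sketched as a toy simulation. The function names (processIntent, executeTask) and the 2f+1 quorum come from the description; the HMAC-based "signatures" and all data shapes are simplifying assumptions for illustration only, not Skate's actual contracts.

```python
# Toy sketch of Skate's intent flow: sign intent -> processIntent -> AVS
# pre-confirmation -> executeTask on the target-chain Gateway.
import hashlib
import hmac

F = 1                      # tolerated faulty validators (assumption)
QUORUM = 2 * F + 1         # signatures the forwarder needs

VALIDATOR_KEYS = {f"validator-{i}": f"key-{i}".encode() for i in range(3 * F + 1)}

def sign(key: bytes, payload: bytes) -> str:
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def process_intent(intent: dict) -> dict:
    """Skate chain: turn a signed intent into a task (emits TaskSubmitted)."""
    return {"intent": intent,
            "task_id": hashlib.sha256(repr(intent).encode()).hexdigest()}

def avs_preconfirm(task: dict) -> list[tuple[str, str]]:
    """AVS validators verify the task and return a quorum of signatures."""
    payload = task["task_id"].encode()
    sigs = [(v, sign(k, payload)) for v, k in VALIDATOR_KEYS.items()]
    return sigs[:QUORUM]

def execute_task(task: dict, sigs: list[tuple[str, str]]) -> str:
    """Target-chain Gateway: check the AVS signatures, then run the calldata."""
    payload = task["task_id"].encode()
    valid = [v for v, s in sigs
             if hmac.compare_digest(sign(VALIDATOR_KEYS[v], payload), s)]
    if len(valid) < QUORUM:
        raise ValueError("task not validated by the AVS")
    return f"executed intent {task['intent']['action']} on {task['intent']['target_chain']}"

intent = {"action": "swap", "target_chain": "TON", "signer": "user-wallet"}
task = process_intent(intent)
print(execute_task(task, avs_preconfirm(task)))
```

The key property the sketch illustrates is that the Gateway refuses to execute unless the forwarder presents a full quorum of validator signatures.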
Commentary
Skate provides an innovative and efficient solution for cross-chain operations of decentralized applications. By offering a unified application state, simplifying cross-chain task execution, and ensuring security, Skate significantly reduces complexity for developers and users in a multi-chain environment. Its flexible architecture and easy integration features give it broad application prospects in the multi-chain ecosystem. However, to achieve comprehensive implementation in high concurrency and multi-chain ecosystems, Skate still needs to continue efforts in performance optimization and cross-chain compatibility.
1.2. How the Decentralized Cryptographic Computing Network Arcium, Backed by Coinbase, NGC, and Long Hash, Achieves Its Vision
Arcium is a fast, flexible, and low-cost infrastructure aimed at providing access to cryptographic computing through blockchain. Arcium is a cryptographic supercomputer that offers large-scale cryptographic computing services, supporting developers, applications, and entire industries to compute on fully encrypted data using a trustless, verifiable, and efficient framework. Through secure multi-party computation (MPC) technology, Arcium provides scalable and secure cryptographic solutions for Web2 and Web3 projects, supported by a decentralized network.
Architecture Overview
The Arcium network is designed to provide secure distributed confidential computing for various applications, from artificial intelligence to decentralized finance (DeFi) and beyond. It is based on advanced cryptographic technologies, including multi-party computation (MPC), enabling trustless and verifiable computation without the need for central authority intervention.
- Multi-Party Execution Environments (MXEs)
MXEs are dedicated, isolated environments for defining and securely executing computational tasks. They support parallel processing (as multiple clusters can execute computations for different MXEs simultaneously), thereby enhancing throughput and security.
MXEs are highly configurable, allowing computing clients to define security requirements, encryption schemes, and performance parameters according to their needs. While individual computational tasks are executed within specific clusters of Arx nodes, multiple clusters can be associated with a single MXE. This ensures that even if some nodes in the cluster are offline or overloaded, computational tasks can still be reliably executed. By pre-defining these configurations, clients can highly flexibly customize the environment based on specific use case requirements.
- arxOS
arxOS is the distributed execution engine within the Arcium network, responsible for coordinating the execution of computational tasks, driving Arx nodes and clusters. Each node (similar to cores in a computer) provides computational resources to execute tasks defined by MXEs.
- Arcis (Arcium's Developer Framework)
Arcis is a Rust-based developer framework that enables developers to build applications on the Arcium infrastructure and supports all of Arcium's multi-party computation (MPC) protocols. It includes a Rust-based framework and compiler.
- Arx Node Clusters (Running arxOS)
Clusters of Arx nodes provide customizable trust models, supporting dishonest-majority protocols (initially Cerberus) and "honest but curious" protocols (such as Manticore). Additional protocols (including honest-majority protocols) will be added in the future to support more use case scenarios.
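The MXE configurability and cluster fallback described above can be sketched as follows. The field names, defaults, and cluster-selection logic are illustrative assumptions, not Arcium's actual API; only the idea (per-MXE security/protocol parameters, multiple clusters per MXE for reliability) comes from the text.

```python
# Illustrative sketch of an MXE bound to several fallback clusters.
from dataclasses import dataclass, field

@dataclass
class MXEConfig:
    encryption_scheme: str = "mpc-secret-sharing"   # cryptographic backend (assumed name)
    protocol: str = "cerberus"                      # dishonest-majority trust model
    min_cluster_size: int = 3                       # nodes per executing cluster

@dataclass
class MXE:
    """A Multi-Party Execution Environment associated with one or more clusters."""
    name: str
    config: MXEConfig
    clusters: list[str] = field(default_factory=list)

    def assign_cluster(self, cluster_id: str) -> None:
        self.clusters.append(cluster_id)

    def pick_cluster(self, offline: set[str]) -> str:
        # A task can still run if some clusters are offline, as long as one remains.
        for c in self.clusters:
            if c not in offline:
                return c
        raise RuntimeError("no live cluster for this MXE")

mxe = MXE("defi-batch", MXEConfig())
mxe.assign_cluster("cluster-a")
mxe.assign_cluster("cluster-b")
print(mxe.pick_cluster(offline={"cluster-a"}))  # → cluster-b
```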
Chain-Level Enforcement
All state management and coordination of computational tasks are handled on-chain through the Solana blockchain, which serves as the consensus layer coordinating the operations of Arx nodes. This ensures fair reward distribution, enforcement of network rules, and alignment of nodes with the current state of the network. Tasks are queued in a decentralized memory pool architecture, where on-chain components help determine which computational tasks have the highest priority, identify misconduct, and manage execution order.
Nodes ensure compliance with network rules by staking collateral. If misconduct or deviation from the protocol occurs, the system implements a penalty mechanism, punishing violating nodes through slashing to maintain the integrity of the network.
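The stake-and-slash mechanism just described can be reduced to a few lines. The stake amount and the 50% penalty fraction are arbitrary assumptions for illustration; Arcium's actual parameters are not specified in the text.

```python
# Minimal sketch of staking collateral and slashing on misconduct.
class StakeRegistry:
    def __init__(self, slash_fraction: float = 0.5):  # penalty fraction is assumed
        self.stakes: dict[str, float] = {}
        self.slash_fraction = slash_fraction

    def stake(self, node: str, amount: float) -> None:
        self.stakes[node] = self.stakes.get(node, 0.0) + amount

    def report_misconduct(self, node: str) -> float:
        """Slash a misbehaving node and return the amount taken."""
        penalty = self.stakes[node] * self.slash_fraction
        self.stakes[node] -= penalty
        return penalty

registry = StakeRegistry()
registry.stake("arx-node-1", 100.0)
burned = registry.report_misconduct("arx-node-1")
print(burned, registry.stakes["arx-node-1"])  # → 50.0 50.0
```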
Commentary
The following are key features that make the Arcium network a cutting-edge secure computing solution:
- Trustless, Arbitrary Cryptographic Computation: The Arcium network achieves trustless computation through its Multi-Party Execution Environments (MXEs), allowing arbitrary computations on encrypted data without exposing the content of the data.
- Execution Guarantee: Through a blockchain-based coordination system, the Arcium network ensures that all computations within the MXEs are reliably executed. Arcium's protocol enforces compliance through staking and penalty mechanisms, requiring nodes to commit staked collateral, which will be penalized if they deviate from the agreed execution rules, thus ensuring the correct completion of each computational task.
- Verifiability and Privacy Protection: Arcium provides a verifiable computation mechanism that allows participants to publicly audit the correctness of computation results, enhancing the transparency and reliability of data processing.
- On-Chain Coordination: The network utilizes the Solana blockchain to manage node scheduling, compensation, and performance incentives. Staking, penalties, and other incentive mechanisms are fully executed on-chain, ensuring the decentralization and fairness of the system.
- Developer-Friendly Interface: Arcium offers a dual interface: one is a web-based graphical interface for non-technical users, and the other is a Solana-compatible SDK for developers to create customized applications. This design allows confidential computing to provide convenience for ordinary users while meeting the needs of highly technical developers.
- Multi-Chain Compatibility: Although initially based on Solana, the Arcium network was designed with multi-chain compatibility in mind, capable of supporting access from different blockchain platforms.
Through these features, the Arcium network aims to redefine how sensitive data is processed and shared in a trustless environment, promoting the broader application of secure multi-party computation (MPC).
1.3. What are the characteristics of Particle, the modular L1 chain abstraction platform led by Cobo and YZI, with Hashkey participating in two rounds?
Particle Network completely simplifies the Web3 user experience through wallet abstraction and chain abstraction. With its wallet abstraction SDK, developers can guide users into smart accounts with a single click via social login.
Additionally, Particle Network's chain abstraction technology stack, with Universal Accounts as its flagship product, enables users to have unified accounts and balances across each chain.
The real-time wallet abstraction product suite of Particle Network consists of three key technologies:
- User Onboarding: With a simplified registration process, users can more easily enter the Web3 ecosystem, enhancing the user experience.
- Account Abstraction: Through account abstraction, users' assets and operations are no longer dependent on a single chain, improving flexibility and convenience for cross-chain operations.
- Upcoming Product: Chain Abstraction: Chain abstraction will further enhance cross-chain capabilities, supporting users in seamlessly operating and managing assets across multiple blockchains, creating a unified on-chain account experience.
Architecture Analysis
Particle Network coordinates and completes cross-chain transactions in a high-performance EVM execution environment through its Universal Accounts and three core functions:
- Universal Accounts: Provide a unified account state and balance, allowing users to manage assets and operations across all chains through a single account.
- Universal Liquidity: Ensures seamless transfer and use of funds between different chains through cross-chain liquidity pools.
- Universal Gas: Simplifies the user experience by automatically managing the gas fees required for cross-chain transactions.
These three core functions work together, enabling Particle Network to unify interactions across all chains and achieve automated fund transfers through atomic cross-chain transactions, helping users achieve their goals without manual intervention.
Universal Accounts: Particle Network's Universal Accounts aggregate token balances across all chains, allowing users to utilize assets from all chains in any decentralized application (dApp) as if using a single wallet.
This functionality is achieved through Universal Liquidity. They can be understood as specialized smart accounts deployed and coordinated across all chains. Users simply need to connect their wallets to create and manage Universal Accounts, with the system automatically assigning management permissions. The wallet connected by the user can be generated through Particle Network's Modular Smart Wallet-as-a-Service for social login or can be a regular Web3 wallet like MetaMask, UniSat, Keplr, etc.
Developers can easily integrate Universal Account functionality into their dApps by implementing Particle Network's universal SDK, empowering cross-chain asset management and operations.
Universal Liquidity: Universal Liquidity is the technical architecture that supports aggregating balances across all chains. Its core function is coordinated by Particle Network through atomic cross-chain transactions and exchanges. These atomic transaction sequences are driven by Bundler nodes, executing UserOperations and completing operations on the target chain.
Universal Liquidity relies on a network of Liquidity Providers (also known as fillers) to move intermediary tokens (such as USDC and USDT) between chains through token pools. These liquidity providers ensure that assets can flow smoothly across chains.
For example, suppose a user wants to purchase an NFT priced in ETH using USDC on the Base chain. In this scenario:
- Particle Network aggregates the user's USDC balances across multiple chains.
- The user uses their assets to purchase the NFT.
- Upon confirming the transaction, Particle Network automatically converts USDC to ETH and purchases the NFT.
These additional on-chain operations require only a few seconds of processing time and are transparent to the user, who does not need to intervene manually. In this way, Particle Network simplifies the management of cross-chain assets, making cross-chain transactions and operations seamless and automated.
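The NFT-purchase walk-through above can be sketched as a toy aggregation-and-routing model. The chain names, per-chain balances, and fixed exchange rate are made-up illustrative values; the real system routes funds via liquidity providers and atomic cross-chain transactions rather than a single in-memory loop.

```python
# Toy model of the example: aggregate USDC across chains, then cover a
# purchase priced in ETH on Base by draining per-chain balances.
USDC_PER_ETH = 2000.0  # assumed fixed rate for illustration

balances = {"Arbitrum": 800.0, "Polygon": 700.0, "Base": 600.0}  # USDC per chain

def aggregate(balances: dict[str, float]) -> float:
    """Universal Account view: one unified USDC balance across all chains."""
    return sum(balances.values())

def buy_nft(balances: dict[str, float], price_eth: float) -> float:
    """Route USDC toward the target chain and convert it to ETH."""
    cost_usdc = price_eth * USDC_PER_ETH
    if aggregate(balances) < cost_usdc:
        raise ValueError("insufficient unified balance")
    for chain in balances:                 # drain chains until the cost is covered
        spend = min(balances[chain], cost_usdc)
        balances[chain] -= spend
        cost_usdc -= spend
        if cost_usdc == 0:
            break
    return price_eth                       # ETH delivered on the target chain

eth = buy_nft(balances, price_eth=1.0)
print(eth, aggregate(balances))  # → 1.0 100.0
```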
Universal Gas: By unifying balances across chains through Universal Liquidity, Particle Network also addresses the fragmentation issue of gas tokens.
In the past, users needed to hold various gas tokens in different wallets to pay gas fees on different chains, which posed significant usability barriers. To solve this problem, Particle Network uses its native Paymaster, allowing users to pay gas fees with any token from any chain. These transactions will ultimately be settled in Particle Network's L1 using the chain's native token (PARTI).
Users do not need to hold PARTI tokens to use Universal Accounts, as their gas tokens will be automatically converted and used for settlement. This makes cross-chain operations and payments more convenient, eliminating the need for users to manage multiple gas tokens.
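A minimal sketch of the Universal Gas idea: the user pays in whatever token they hold, and the paymaster converts it for settlement in PARTI. The exchange rates here are invented constants, and the real settlement path on Particle's L1 is of course far more involved.

```python
# Sketch: pay gas with any token; the paymaster converts it to cover a
# gas cost denominated in PARTI. Rates are made-up for illustration.
RATES_TO_PARTI = {"USDC": 4.0, "ETH": 8000.0, "SOL": 600.0}

def pay_gas(token: str, gas_cost_parti: float) -> float:
    """Return how much of `token` the paymaster deducts to cover the gas."""
    rate = RATES_TO_PARTI[token]
    return gas_cost_parti / rate

print(pay_gas("USDC", gas_cost_parti=2.0))  # → 0.5
```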
Commentary
Advantages:
- Unified Management of Cross-Chain Assets: Universal Accounts and Universal Liquidity allow users to manage and use assets across different chains without worrying about asset fragmentation or the complexity of cross-chain transfers.
- Simplified User Experience: Through social login and Modular Smart Wallet-as-a-Service, users can easily access Web3, lowering the entry barrier.
- Automation of Cross-Chain Transactions: Atomic cross-chain transactions and Universal Gas make the automatic conversion and payment of assets and gas tokens seamless, enhancing user convenience.
- Developer-Friendly: Developers can easily integrate cross-chain functionality into their dApps using Particle Network's universal SDK, reducing the complexity of cross-chain integration.
Disadvantages:
- Dependence on Liquidity Providers: The network of liquidity providers (for cross-chain transfers of USDC and USDT) needs broad participation to ensure smooth liquidity. If liquidity pools are insufficient or provider participation is low, it may affect the smoothness of transactions.
- Centralization Risks: Particle Network relies to some extent on its native Paymaster to handle gas fee payments and settlements, which may introduce centralization risks and dependencies.
- Compatibility and Popularity: Although it supports multiple wallets (like MetaMask, Keplr, etc.), compatibility between different chains and wallets may still pose a significant challenge to user experience, especially for smaller chains or wallet providers.
Overall, Particle Network greatly enhances user experience and developer efficiency by simplifying cross-chain operations and payments, but it also faces challenges related to liquidity and centralized management.
2. Detailed Explanation of Projects to Watch This Week
2.1. Detailed Explanation of Walrus, an Innovative Decentralized Storage Solution Led by A16z, Raising a Record $140 Million This Month
Introduction
Walrus is an innovative solution for decentralized big-data storage. It combines fast, linearly decodable erasure codes that scale to hundreds of storage nodes, achieving extremely high resilience with low storage overhead, and uses the next-generation public chain Sui as its control plane, managing everything from the lifecycle of storage nodes to the lifecycle of blobs, as well as the economics and incentive mechanisms, eliminating the need for a fully custom blockchain protocol.
The core of Walrus is a new encoding protocol called Red Stuff, which employs an innovative two-dimensional (2D) encoding algorithm based on fountain codes. Unlike RS encoding, fountain codes primarily rely on XOR or other very fast operations on large data blocks, avoiding complex mathematical computations. This simplicity allows for encoding large files in a single transmission, significantly speeding up processing times. The 2D encoding of Red Stuff enables the recovery of lost fragments through bandwidth proportional to the amount of lost data. Additionally, Red Stuff incorporates authenticated data structures to prevent malicious clients, ensuring the consistency of stored and retrieved data.
Walrus operates in epochs, with each epoch managed by a committee of storage nodes. All operations within an epoch can be sharded by blobid, achieving high scalability. The system facilitates the writing process of blobs by encoding data into primary and secondary fragments, generating Merkle commitments, and distributing these fragments to storage nodes. The reading process involves collecting and verifying fragments, with the system providing best-effort and incentive paths to address potential system failures. To ensure that the availability of reading and writing blobs is not interrupted during the natural turnover of participants in the permissionless system, Walrus features an efficient committee reconfiguration protocol.
Another key innovation of Walrus is its method of storage proof, which is a mechanism to verify whether storage nodes indeed store the data they claim to hold. Walrus addresses the scalability challenges associated with these proofs by incentivizing all storage nodes to hold fragments of all stored files. This complete replication allows for a new storage proof mechanism that challenges the storage nodes as a whole rather than individually for each file. Consequently, the cost of proving file storage grows logarithmically with the number of stored files, rather than linearly as in many existing systems.
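A toy way to see the logarithmic claim: if a node commits to everything it stores under one Merkle tree, a single challenged leaf is proven with one hash per tree level, so proof size grows with log of the number of fragments rather than with their count. This is a generic Merkle-tree stand-in, not Walrus's actual challenge protocol.

```python
# Illustration: inclusion-proof size in a balanced Merkle tree is O(log N).
import hashlib
import math

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_proof_len(num_fragments: int) -> int:
    """Levels in a balanced Merkle tree = hashes in one inclusion proof."""
    return math.ceil(math.log2(num_fragments))

def build_root(leaves: list[bytes]) -> bytes:
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

fragments = [f"fragment-{i}".encode() for i in range(1024)]
root = build_root(fragments)          # one commitment over all stored data
print(merkle_proof_len(len(fragments)))  # → 10 hashes per challenge, not 1024
```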
Finally, Walrus introduces a staking-based economic model that combines rewards and penalties to align incentives and enforce long-term commitments. The system includes a pricing mechanism for storage resources and write operations, along with a token governance model for parameter adjustments.
Technical Analysis
Red Stuff Encoding Protocol
Current industry encoding protocols achieve low overhead factors and extremely high guarantees but are still unsuitable for long-term deployment. The main challenge is that in a long-running large-scale system, storage nodes frequently encounter failures, losing their fragments and needing to be replaced. Additionally, in a permissionless system, even if storage nodes have sufficient incentives to participate, natural turnover among nodes will occur.
Both scenarios lead to a significant amount of data needing to be transmitted across the network, equivalent to the total amount of stored data, to recover lost fragments for new storage nodes. This is extremely costly. Therefore, the team aims for the recovery cost during node turnover to be proportional only to the amount of data that needs to be recovered and to decrease inversely with the number of storage nodes (n).
To achieve this, Red Stuff encodes large data blocks in a two-dimensional (2D) manner. The primary dimension is equivalent to the RS encoding used in previous systems. However, to efficiently recover fragments, Walrus also encodes in the secondary dimension. Red Stuff is based on linear erasure codes and the Twin-code framework, which provides efficient recovery of erasure-coded storage in fault-tolerant settings suitable for environments with trusted writers. The team has adapted this framework for Byzantine fault-tolerant environments and optimized it for single storage node clusters, which will be described in detail below.
- Encoding
The starting point is to split a large data block into f + 1 fragments. Rather than simply encoding repair fragments from these, Red Stuff first adds a second dimension during the splitting process:
(a) Two-Dimensional Primary Encoding. The file is split into 2f + 1 columns and f + 1 rows. Each column is encoded as an independent blob containing 2f repair symbols. Then, the extended part of each row becomes the primary fragment for the corresponding node.
(b) Two-Dimensional Secondary Encoding. The file is split into 2f + 1 columns and f + 1 rows. Each row is encoded as an independent blob containing f repair symbols. Then, the extended part of each column becomes the secondary fragment for the corresponding node.
Figure 2: 2D Encoding/ Red Stuff
The original blob is split into f + 1 primary fragments (vertical in the figure) and 2f + 1 secondary fragments (horizontal in the figure). Figure 2 illustrates this process. Ultimately, the file is split into (f + 1)(2f + 1) symbols, which can be visualized in a [f + 1, 2f + 1] matrix.
Given this matrix, repair symbols are generated in two dimensions. Each of the 2f + 1 columns (each of size f + 1) is extended to n symbols, making the number of rows in the matrix n, and each row is assigned as a primary fragment to a node (see Figure 2a). This nearly triples the amount of data to be sent. To enable efficient recovery of each fragment, the initial [f + 1, 2f + 1] matrix is also extended in the other dimension, expanding each row from 2f + 1 symbols to n symbols (see Figure 2b) using the same encoding scheme. This creates n columns, each assigned as a secondary fragment to the corresponding node.
For each fragment (primary and secondary), the writer also calculates a commitment over its symbols. For each primary fragment, the commitment covers all symbols in the extended row; for each secondary fragment, it covers all values in the extended column. In the final step, the client creates a list of these fragment commitments, which serves as the blob commitment.
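The 2D layout above can be sketched for f = 1 (so n = 3f + 1 = 4 nodes). Real Red Stuff uses linear erasure codes in both dimensions; here each extension symbol is a plain XOR parity, which only illustrates the matrix bookkeeping and fragment shapes, not the actual fault tolerance of the scheme.

```python
# Toy Red Stuff layout: a (f+1) x (2f+1) source matrix, columns extended to
# n rows (primary fragments), rows extended to n columns (secondary fragments).
from functools import reduce

F = 1
N = 3 * F + 1                      # 4 nodes
ROWS, COLS = F + 1, 2 * F + 1      # 2 x 3 source matrix

def xor(symbols):
    return reduce(lambda a, b: a ^ b, symbols)

# Source blob as a 2x3 matrix of byte-sized symbols (arbitrary values).
matrix = [[11, 22, 33],
          [44, 55, 66]]

# Primary dimension: extend every column from f+1 to n symbols.
extended_cols = []
for c in range(COLS):
    col = [matrix[r][c] for r in range(ROWS)]
    col += [xor(col)] * (N - ROWS)          # toy repair symbols (XOR parity)
    extended_cols.append(col)
# Row r of the extended matrix is node r's primary fragment (length 2f+1).
primary_fragments = [[extended_cols[c][r] for c in range(COLS)] for r in range(N)]

# Secondary dimension: extend every row from 2f+1 to n symbols.
extended_rows = []
for r in range(ROWS):
    row = matrix[r][:]
    row += [xor(row)] * (N - COLS)          # toy repair symbols (XOR parity)
    extended_rows.append(row)
# Column c of the extended matrix is node c's secondary fragment (length f+1).
secondary_fragments = [[extended_rows[r][c] for r in range(ROWS)] for c in range(N)]

print(len(primary_fragments), len(secondary_fragments))  # → 4 4
```

Note how node i's fragment pair consists of one extended row and one extended column, matching the writing protocol described next in the text.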
- Writing Protocol
The writing protocol of Red Stuff follows the same pattern as the RS encoding protocol. The writer W first encodes the blob and creates a fragment pair for each node. A fragment pair i is the pairing of the i-th primary and secondary fragments. There are a total of n = 3f + 1 fragment pairs, equivalent to the number of nodes.
Next, W sends the commitments of all fragments to each node, along with the corresponding fragment pairs. Each node checks whether its fragment in the fragment pair matches the commitment, recalculates the blob's commitment, and replies with a signed confirmation. Once 2f + 1 signatures are collected, W generates a certificate and publishes it on-chain to prove that the blob will be available.
In a theoretical asynchronous network model, assuming reliable transmission, all correct nodes will eventually receive a fragment pair from an honest writer. However, in practical protocols, the writer may need to stop retransmitting. After collecting 2f + 1 signatures, it is safe to stop retransmission, ensuring that at least f + 1 correct nodes (selected from the 2f + 1 responding nodes) hold the fragment pair of the blob.
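The availability argument above is simple counting: among any 2f + 1 responders, at most f can be faulty, so at least f + 1 correct nodes must hold the fragment pair.

```python
# Quorum-intersection arithmetic behind "2f+1 signatures suffice".
for f in (1, 5, 33):
    responders = 2 * f + 1
    guaranteed_correct = responders - f     # worst case: f responders are faulty
    print(f, guaranteed_correct)            # always f + 1
```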
Recovery of a lost fragment pair (here, by a replacement node, Node 4) proceeds as follows:
(a) Node 1 and Node 3 jointly hold two rows and two columns.
In this case, Node 1 and Node 3 hold two rows and two columns of the file, respectively. The data fragments held by each node are assigned to different rows and columns in the 2D encoding, ensuring that data is distributed and redundantly stored across multiple nodes for high availability and fault tolerance.
(b) Each node sends its row/column intersection with Node 4's column/row to Node 4 (in red). Node 3 needs to encode this row.
In this step, Node 1 and Node 3 send their row/column intersections with Node 4's column/row to Node 4. Specifically, Node 3 needs to encode its held row to intersect with Node 4's data fragment and pass it to Node 4. This way, Node 4 can receive the complete data fragment and perform recovery or verification work. This process ensures data integrity and redundancy, allowing other nodes to recover data even if some nodes fail.
(c) Node 4 uses its f + 1 symbols on its column to recover the complete secondary fragment (in green). Then, Node 4 sends the recovered column intersection to the rows of other recovery nodes.
In this step, Node 4 utilizes its f + 1 symbols on its column to recover the complete secondary fragment. The recovery process is based on the intersection of data, ensuring the efficiency of data recovery. Once Node 4 has recovered its secondary fragment, it sends the recovered column intersection to other nodes that are recovering, assisting them in recovering their row data. This interaction ensures the smooth progress of data recovery, and collaboration among multiple nodes can accelerate the recovery process.
(d) Node 4 uses its f + 1 symbols on its row and all secondary symbols sent by other honest recovery nodes (in green) (these symbols should be at least 2f, plus the 1 symbol recovered in the previous step) to recover its primary fragment (in deep blue).
At this stage, Node 4 not only uses its f + 1 symbols on its row to recover the primary fragment but also needs to utilize the secondary symbols sent by other honest recovery nodes to assist in completing the recovery. With these symbols received from other nodes, Node 4 can recover its primary fragment. To ensure the accuracy of the recovery, Node 4 will receive at least 2f + 1 valid secondary symbols (including the 1 symbol recovered in the previous step). This mechanism enhances fault tolerance and data recovery capability by integrating data from multiple sources.
- Reading Protocol
The reading protocol is the same as that of RS encoding, where nodes only need their primary fragments. The reader R first asks any node for the blob's commitment set and checks, via the commitment opening protocol, that the returned set matches the requested blob commitment. Next, R sends a read request for the blob commitment to all nodes, which respond with the primary fragments they hold (possibly incrementally, to save bandwidth). Each response is checked against the corresponding commitment in the blob's commitment set.
When R collects f + 1 correct primary fragments, R decodes the blob, re-encodes it, recalculates the blob commitment, and compares it with the requested blob commitment. If the two commitments match (i.e., they are the same as the commitment published by W on-chain), R outputs blob B; otherwise, R outputs an error or an indication that recovery is not possible.
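The read path can be sketched as follows. Hashing a sorted fragment list stands in for the real commitment scheme, and "decoding" is direct reassembly of plain fragments; both are simplifying assumptions that only illustrate the collect / decode / re-commit / compare sequence.

```python
# Toy read protocol: collect f+1 primary fragments, decode, recompute the
# commitment, and compare with the commitment published on-chain.
import hashlib

F = 1  # assumed fault parameter for the sketch

def commit(fragments: dict[int, bytes]) -> str:
    joined = b"".join(fragments[i] for i in sorted(fragments))
    return hashlib.sha256(joined).hexdigest()

def read_blob(responses: dict[int, bytes], onchain_commitment: str) -> bytes:
    if len(responses) < F + 1:
        raise ValueError("need at least f+1 primary fragments")
    blob = b"".join(responses[i] for i in sorted(responses))   # toy decode
    if commit(responses) != onchain_commitment:                # re-encode & check
        raise ValueError("commitment mismatch: inconsistent blob")
    return blob

fragments = {0: b"hello ", 1: b"walrus"}
published = commit(fragments)                 # what the writer put on-chain
print(read_blob(fragments, published))        # → b'hello walrus'
```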
Walrus Decentralized Secure Blob Storage
- Writing a Blob
The process of writing a blob in Walrus can be illustrated through Figure 4.
The process begins with the writer (➊) encoding the Blob using Red Stuff, as shown in Figure 2. This process generates sliver pairs, a set of commitments for the slivers, and a Blob commitment. The writer derives a blobid by hashing the Blob commitment along with metadata such as the file length and encoding type.
Next, the writer (➋) submits a transaction to the blockchain to secure sufficient guarantees for Blob storage space over a series of Epochs and registers the Blob. The transaction sends the size of the Blob and the Blob commitment, from which the blobid can be re-derived. The blockchain smart contract must ensure there is enough space on each node to store the encoded slivers, as well as all metadata related to the Blob commitment. Payment may be sent along with the transaction to guarantee free space, or existing free space can be used as an additional resource attached to the request. The Walrus implementation allows for both options.
Once the registration transaction is submitted (➌), the writer notifies the storage nodes that they are responsible for storing the slivers of the blobid, while sending the transaction, commitments, and the primary and secondary slivers allocated to each storage node along with proofs that the slivers correspond to the published blobid. The storage nodes will verify the commitments and return a signed confirmation for the blobid after successfully storing the commitments and sliver pairs.
Finally, the writer waits to collect 2f + 1 signed confirmations (➍), which constitute a write certificate. This certificate is then published on-chain (➎), marking the Point of Availability (PoA) for the Blob in Walrus. The PoA indicates that the storage nodes are obligated to maintain the availability of these slivers for reading within the specified Epochs. At this point, the writer can delete the Blob from local storage and go offline. Additionally, the writer can use the PoA as proof of the Blob's availability to third-party users and smart contracts.
Nodes will listen for blockchain events to check if the Blob has reached its PoA. If they do not have the sliver pairs for that Blob, they will execute a recovery process to obtain all commitments and sliver pairs for the Blob up to the PoA timestamp. This ensures that ultimately all correct nodes will hold all sliver pairs for the Blob.
Summary
In summary, Walrus's contributions include:
- Defining the problem of asynchronous complete data sharing and proposing Red Stuff, the first protocol capable of efficiently solving this problem under Byzantine fault tolerance.
- Introducing Walrus, the first permissionless decentralized storage protocol designed for low replication costs that can efficiently recover data lost due to failures or participant turnover.
- Introducing a staking-based economic model that aligns incentives and enforces long-term commitments through a combination of rewards and penalties, and proposing the first asynchronous challenge protocol for efficient storage proofs.
III. Industry Data Analysis
1. Overall Market Performance
1.1 Spot BTC & ETH ETF
From March 24, 2025, to March 29, 2025, the fund flows for Bitcoin (BTC) and Ethereum (ETH) ETFs exhibited different trends:
Bitcoin ETF:
- March 24, 2025: The Bitcoin ETF saw a net inflow of $84.2 million, marking the seventh consecutive day of positive inflows, with a total inflow of $869.8 million.
- March 25, 2025: The Bitcoin ETF recorded a net inflow of $26.8 million again, bringing the cumulative inflow over eight days to $896.6 million.
- March 26, 2025: Inflows into the Bitcoin ETF continued, with a net inflow of $89.6 million, marking the ninth consecutive day of inflows and bringing the total to $986.2 million.
- March 27, 2025: The net inflow for the Bitcoin ETF was $89 million, maintaining the positive inflow trend.
- March 28, 2025: The Bitcoin ETF recorded another net inflow of $89 million, extending its streak of consecutive positive inflows.
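The cumulative figures above can be checked with a short running-total computation. The pre-week base of $785.6 million used here is not stated in the report; it is inferred from the March 24 daily inflow ($84.2M) and the cumulative total reported that day ($869.8M).

```python
def running_totals(flows, start=0.0):
    # Accumulate daily net flows (in $M) into a running cumulative series.
    totals, acc = [], start
    for f in flows:
        acc += f
        totals.append(round(acc, 1))
    return totals

btc_daily = [84.2, 26.8, 89.6, 89.0, 89.0]  # net inflows, Mar 24-28, in $M
cumulative = running_totals(btc_daily, start=785.6)
# The first three values reproduce the reported totals:
# $869.8M (Mar 24), $896.6M (Mar 25), $986.2M (Mar 26).
```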
Ethereum ETF:
- March 24, 2025: The Ethereum ETF had a net inflow of $0, ending a previous streak of 13 days of outflows.
- March 25, 2025: The Ethereum ETF saw a net outflow of $3.3 million, resuming outflows after the one-day pause.
- March 26, 2025: The Ethereum ETF continued to face a net outflow of $5.9 million, with investor sentiment remaining cautious.
- March 27, 2025: The Ethereum ETF had a net outflow of $4.2 million, reflecting persistently cautious sentiment.
- March 28, 2025: The Ethereum ETF recorded another net outflow of $4.2 million, extending the outflow trend.
The total net outflow for the Ethereum spot ETF stood at $10.9256 million.
1.2. Spot BTC vs ETH Price Trends
BTC
Analysis
After failing to break above the upper boundary of the wedge (around $89,000) last week, BTC entered the expected downtrend. This week, watch three key support levels: first support at $81,400, second support at the $80,000 round number, and bottom support at this year's low of $76,600. For users waiting for an entry opportunity, these three levels can serve as candidates for phased entry.
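A phased entry across the three support levels mentioned above can be sketched as a simple allocation ladder. The capital amount, the weights, and the function name are hypothetical illustrations for the sketch, not a recommendation from the report.

```python
def ladder_allocation(capital: float, supports: list, weights: list) -> dict:
    # Split a capital budget across support price levels for phased entry.
    # Weights must sum to 1; heavier weight can go to deeper supports.
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return {level: round(capital * w, 2) for level, w in zip(supports, weights)}

# Hypothetical $10,000 budget spread over the three supports from the text.
plan = ladder_allocation(10_000, [81_400, 80_000, 76_600], [0.3, 0.3, 0.4])
```

Weighting the lowest support most heavily reflects the common practice of reserving the largest tranche for the deepest pullback.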
ETH
Analysis
After failing to stabilize above $2,000, ETH is now approaching a pullback to this year's low near $1,760. The subsequent trend will largely depend on BTC; if BTC can stabilize above the $80,000 mark and initiate a rebound, ETH is likely to form a double bottom pattern above $1,760 and could rise to resistance around $2,300. Conversely, if BTC falls below $80,000 again and seeks support at $76,600 or even lower prices, ETH is likely to drop to around $1,700 or even $1,500 for secondary bottom support.
1.3. Fear & Greed Index
2. Public Chain Data
2.1. BTC Layer 2 Summary
Analysis
From March 24, 2025, to March 28, 2025, the Bitcoin Layer-2 (L2) ecosystem experienced some significant developments:
Increase in sBTC Deposit Cap for Stacks: Stacks announced the completion of the cap-2 expansion for sBTC, raising the deposit cap by 2,000 BTC, bringing the total capacity to 3,000 BTC (approximately $250 million). This increase aims to enhance liquidity and support the growing demand for Bitcoin-backed DeFi applications on the Stacks platform.
Milestone for Citrea's Testnet: The Bitcoin L2 solution Citrea reported a significant milestone, with transaction volume on its testnet surpassing 10 million. The platform also updated the Clementine design, simplifying the zero-knowledge proof (ZKP) verifier and enhancing security, laying the groundwork for the scalability of Bitcoin transactions.
BOB's BitVM Bridge Activation: BOB (Build on Bitcoin) successfully activated the BitVM bridge on its testnet, allowing users to mint Yield BTC from BTC with minimal trust assumptions. This advancement enhances interoperability between Bitcoin and other blockchain networks, enabling more complex transactions without compromising security.
Bitlayer's BitVM Bridge Launch: Bitlayer launched the BitVM bridge, allowing users to mint Yield BTC from BTC with minimal trust assumptions. This innovation improves the scalability and flexibility of Bitcoin transactions, supporting the development of DeFi applications within the Bitcoin ecosystem.
2.2. EVM & Non-EVM Layer 1 Summary
Analysis
EVM-Compatible Layer 1 Blockchains:
- BNB Chain's 2025 Roadmap: BNB Chain unveiled its vision for 2025, planning to scale to 100 million transactions per day, enhance security to address miner extractable value (MEV) issues, and introduce smart wallet solutions similar to EIP-7702. The roadmap also emphasizes the integration of artificial intelligence (AI) use cases, focusing on leveraging valuable private data and enhancing developer tools.
- Polkadot's 2025 Development: Polkadot released its 2025 roadmap, highlighting support for EVM and Solidity to enhance interoperability and scalability. The plan includes implementing a multi-core architecture to increase capacity and upgrading cross-chain messaging through XCM v5.
Non-EVM Layer 1 Blockchains:
- W Chain Mainnet Soft Launch: W Chain, a hybrid blockchain network based in Singapore, announced that its Layer 1 mainnet has entered the soft launch phase. Following a successful testnet phase, W Chain introduced the W Chain bridging feature to enhance cross-platform compatibility and interoperability. The commercial mainnet is expected to officially launch in March 2025, with plans to introduce features such as a decentralized exchange (DEX) and ambassador program.
- N1 Blockchain Investor Support Confirmation: N1, an ultra-low latency Layer 1 blockchain, confirmed that its original investors, including Multicoin Capital and Arthur Hayes, will continue to support the project, which is expected to launch before the mainnet release. N1 aims to provide developers with unrestricted scalability and ultra-low latency support for decentralized applications (DApps), supporting multiple programming languages to simplify development.
2.3. EVM Layer 2 Summary
Analysis
Between March 24, 2025, and March 29, 2025, several significant developments occurred in the EVM Layer 2 ecosystem:
- Polygon zkEVM Mainnet Beta Launch: On March 27, 2025, Polygon successfully launched the zkEVM (Zero-Knowledge Ethereum Virtual Machine) mainnet Beta. This Layer 2 scaling solution enhances Ethereum's scalability by executing off-chain computations, enabling faster and lower-cost transactions. Developers can seamlessly migrate their Ethereum applications to Polygon's zkEVM, as it is fully compatible with Ethereum's codebase.
- Telos Foundation's ZK-EVM Development Roadmap: The Telos Foundation announced a development roadmap for its ZK-EVM based on SNARKtor. The plan includes deploying hardware-accelerated zkEVM on the Telos testnet in Q4 2024, followed by integration with the Ethereum mainnet in Q1 2025. The subsequent phases aim to integrate SNARKtor to improve verification efficiency on Layer 1, with full integration expected by Q4 2025.
IV. Macroeconomic Data Review and Key Data Release Points for Next Week
The core PCE price index year-on-year for February, released on March 28, recorded 2.7% (expected 2.7%, previous value 2.6%), marking the third consecutive month above the Federal Reserve's target, primarily driven by rising import costs due to tariffs.
Key macroeconomic data points for this week (March 31 - April 4) include:
April 1: U.S. March ISM Manufacturing PMI
April 2: U.S. March ADP Employment Change
April 3: U.S. Initial Jobless Claims for the week ending March 29
April 4: U.S. March Unemployment Rate; U.S. March Seasonally Adjusted Non-Farm Payrolls
V. Regulatory Policies
During the week, the U.S. SEC concluded its investigations into Crypto.com and Immutable, and Trump pardoned the co-founders of BitMEX. A dedicated stablecoin bill has also officially been placed on the discussion agenda, accelerating the push toward deregulation and compliance in the crypto industry.
U.S.: Oklahoma Passes Strategic Bitcoin Reserve Bill
The Oklahoma House voted to pass a strategic Bitcoin reserve bill. The bill allows the state to invest 10% of public funds in Bitcoin or any digital asset with a market capitalization exceeding $500 billion.
Additionally, the U.S. Department of Justice announced the dismantling of an ongoing terrorism financing scheme, seizing approximately $201,400 (at current value) in cryptocurrency, which was stored in wallets and accounts intended to fund Hamas. The seized funds originated from fundraising addresses allegedly controlled by Hamas, which have been used to launder over $1.5 million in virtual currency since October 2024.
Panama: Proposed Cryptocurrency Bill Announced
Panama announced a proposed cryptocurrency bill to regulate cryptocurrencies and promote the development of blockchain-based services. The proposed bill establishes a legal framework for the use of digital assets, sets licensing requirements for service providers, and includes strict compliance measures that meet international financial standards. Digital assets are recognized as a legitimate means of payment, allowing individuals and businesses to freely agree to use digital assets in commercial and civil contracts.
EU: Potential 100% Capital Support Requirement for Crypto Assets
According to Cointelegraph, EU insurance regulators have proposed implementing a 100% capital support requirement for insurance companies holding crypto assets, citing "inherent risks and high volatility" associated with crypto assets.
South Korea: Proposed Access Block for 17 Overseas Applications Including Kucoin
The Financial Intelligence Unit (FIU) of South Korea announced that starting March 25, it will implement domestic access restrictions on the Google Play platform applications of 17 unregistered overseas virtual asset service providers (VASPs), including KuCoin and MEXC. This means users will not be able to install the related applications, and existing users will also be unable to update them.
Disclaimer: This article represents only the personal views of the author and does not reflect the position or views of this platform. It is provided for informational purposes only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page involves infringement, please send the relevant proof of rights and proof of identity to support@aicoin.com, and the platform's staff will verify the claim.