Dialogue with Solana founder Anatoly: How to build a moat for Solana?


Original Title: What's Next For Solana | Anatoly Yakovenko

Original Source: Lightspeed YouTube Account

Translation: Ismay, BlockBeats

Last week, SOL broke through $248, reaching its highest level since the FTX crash and coming within 4% of the all-time high of $259 set on November 6, 2021. In this episode of the Lightspeed podcast, Solana Labs co-founder Anatoly Yakovenko and Helius CEO Mert discussed Solana's transaction fees, how to remain competitive in the crypto space, SOL inflation, competition with Apple and Google, and whether Solana has a moat.

Table of Contents

  • 1. Why are there still so many front-running transactions on Solana?
  • 2. Can L2 architecture really solve congestion issues?
  • 3. Can chains focused on single-point optimization really shake Solana's global consensus advantage?
  • 4. Is shared block space a "tragedy of the commons" or the key to DeFi capital efficiency?
  • 5. What is Solana's core competitive advantage?
  • 6. Will Solana's inflation rate decrease?
  • 7. Are the development costs of FireDancer too high?
  • 8. How does Solana compete with Ethereum?

1. Why are there still so many front-running transactions on Solana?

Mert: So, let's get started, Anatoly. One of the reasons you founded Solana was your frustration with front-running in traditional markets. You wanted to achieve global information synchronization at light speed through Solana, maximizing competition and minimizing arbitrage, but that hasn't been realized yet; everyone seems to be constantly facing front-running. Not only has MEV surged, but Jito's fees have, in many cases, exceeded Solana's priority fees. What do you think about this issue? Why is this happening?

Anatoly: You can set up a validator node yourself and submit your transactions without interference from others, right? In traditional markets, you don't have that option at all, which is where Solana's decentralization comes into play, and that part does work.

The problem now is that setting up a validator isn't easy, and attracting enough stake to hold a meaningful position isn't simple either. Finding other nodes willing to order transactions the way you want is even harder. But it is doable; it just takes time and effort. The market isn't mature enough yet; there isn't enough competition, for example between Jito and its competitors, so users can't easily choose, "I will only submit to order flow provider Y and never to order flow provider K."

From a fundamental functionality perspective, as an enthusiast, I can start my own validator node, get some staking, run my algorithm, and submit my transactions directly; no one can stop me, and that is achievable. The question now is whether we have matured enough for users to always choose the best way to submit transactions. I think we are far from that point.

In my view, the way to achieve this goal is actually quite simple, though also very difficult: increase bandwidth, reduce latency, optimize the network as much as possible, and eliminate the bottlenecks that make the system unfair, for example by introducing multiple concurrent block leaders. If there is only one leader per slot and you have 1% of the stake, you get a turn roughly every 100 slots. But with two leaders per slot and 1% of the stake, you get a turn roughly every 50 slots. So the more leaders we can add, the less stake you need to run your algorithm with the quality of service you require.
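To make the arithmetic concrete, here is a minimal sketch with hypothetical numbers (real leader scheduling is stake-weighted per epoch, so these are expectations, not guarantees):

```python
def slots_between_leader_turns(stake_share: float, leaders_per_slot: int) -> float:
    # With stake-weighted leader selection, a validator holding
    # `stake_share` of total stake expects to occupy one of the
    # leader seats roughly every 1 / (stake_share * leaders_per_slot) slots.
    return 1.0 / (stake_share * leaders_per_slot)

print(slots_between_leader_turns(0.01, 1))  # 1% stake, 1 leader/slot  -> ~100 slots
print(slots_between_leader_turns(0.01, 2))  # 1% stake, 2 leaders/slot -> ~50 slots
```

Doubling the number of concurrent leaders halves the stake needed for the same frequency of leader turns, which is exactly the lowered entry barrier Anatoly describes next.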

(Someone created a website called the Solana Roadmap that simply displays "Increase Bandwidth, Reduce Latency." Anatoly asked who made it.)

Mert: The current situation is that you need to accumulate a certain amount of stake to get your transactions prioritized, even though that shouldn't really be the case. It seems that holding more stake doesn't just help you obtain your own block space; there is a dynamic here where the richer you are, the greater your advantage. Is that acceptable?

Anatoly: Performance improvements lower the threshold for honest participants to change market dynamics. If we have two leaders per second, the amount of staking required to provide the same service is halved, thus lowering the economic entry barrier, allowing more people to participate in competition, and they can say, "Hey, I am the best validator; you should submit all your Jupiter transactions to me, and I will get you what you want." This way, I can run a business and offer it to users, and competition will force the market to reach the fairest balance point. That is the ultimate goal.

But to achieve this goal, I think a significant difference between Solana and Ethereum is that I firmly believe this is just an engineering problem. We just need to optimize the network, increase bandwidth, for example, more leaders per second, increase block size, and everything grows until competition forces the market to reach an optimal state.

2. Can L2 architecture really solve congestion issues?

Mert: Speaking of engineering problems, the reason Jito's fees exceed priority fees is not just MEV, but also because the transaction landing process, or more accurately, the operation of the local fee market, does not always work deterministically and can sometimes be quite unstable. What is the reason for this?

Anatoly: This is still because the current transaction processing implementation is far from optimal under very high load; when the load is low, everything runs smoothly. During the mini bear market over the past six months, I saw end-to-end confirmation times under a second; everything ran very smoothly because the number of transactions submitted to the leader was low, and the queues, fast connection tables, and other resources never filled up, so no backlogs formed at performance bottlenecks.

When those queues build up, transactions cannot be prioritized before they reach the scheduler, which effectively breaks the local fee market. So, in my view, this is also an engineering problem, and it may be the area of the current ecosystem that needs the most engineering investment: optimizing those processing pipelines to the extreme.
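As a toy illustration (not Solana's actual scheduler) of why backlogs break the fee market: a priority scheduler can only order the transactions that have already reached it, so anything stuck in an upstream queue is invisible to fee-based ordering.

```python
import heapq

pending = []  # max-heap via negated fee: the scheduler's view of the world

def submit(tx_id: str, priority_fee: int) -> None:
    # Only transactions that make it past ingestion ever land here.
    heapq.heappush(pending, (-priority_fee, tx_id))

def next_tx() -> tuple[str, int]:
    neg_fee, tx_id = heapq.heappop(pending)
    return tx_id, -neg_fee

submit("a", 5_000)
submit("b", 100)
submit("c", 20_000)
print(next_tx())  # ('c', 20000): highest fee wins, but only among what arrived
```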

Mert: Given these issues, your answer seems to be that the problems do exist, but they are engineering problems and therefore solvable, and future iterations will address them. Some might say these problems don't exist on L2s because of their architecture, right? Because you can achieve first-come, first-served through a centralized sequencer.

Anatoly: First-come, first-served leads to the same problems; even Arbitrum has priority channels. If you implement first-come, first-served, it encourages spam transactions, which is the same issue. If you have a general-purpose L2 that supports multiple applications, it will ultimately hit the same problems.

One could argue that because L2s don't have consensus and a vertically integrated ecosystem the way Solana does, they can iterate faster, like a Web2 company that pushes a new version every 48 hours and quickly fixes issues through its centralized sequencer. But they still face the same problems as Solana.

You could say that Jito does have the opportunity to solve these problems, because their relayers can update every 24 hours or ship continuously. Either way, what they are not doing today is scheduling and filtering traffic aggressively enough to keep the relayers' output within what the validator scheduler can handle, but a similar effect is achievable.

So I don't think L2s by themselves solve these problems; an L2 only avoids them while it hosts a single popular application and nothing else. And even that doesn't hold within a single application: if the application has multiple markets, congestion in market A affects all the other markets.

3. Can chains focused on single-point optimization really shake Solana's global consensus advantage?

Mert: Let's look at it from another angle. What if it isn't a general-purpose L2 but a DeFi-focused chain like Atlas, which runs the SVM? How does Solana compete with such a chain? Atlas doesn't have to worry about consensus overhead or shared block space, can focus on DeFi optimization, and can still offer free markets through the SVM.

Anatoly: What you are talking about is actually Solana's competitiveness within a smaller cluster of validator nodes. In this case, there is only one node, making it easier to optimize, and you can use larger hardware. That is the core issue: is synchronous composability important at scale? This smaller network can only cover the area where that hardware is located, so information still needs to be transmitted globally. Ultimately, Solana has multiple validators that can globally synchronize transaction submissions and is permissionless and open.

If this problem is solved, the end result is Solana. Whether or not data is submitted to L2, the key issue is how to synchronize information globally and reach consensus quickly. Once you start addressing this issue, it is no longer something that can be solved by a single machine in New York or Singapore; you need some form of consensus, consistency, and linearization. Even if you later rely on L2 for stricter settlement guarantees, you will still face the current issues of Solana. So, in my view, these single-node SVMs are basically no different from Binance.

How to compete with Binance is a more interesting question. If you have a choice, you can use SVM, but users will ultimately prefer to use Binance because it offers a better user experience. Therefore, we need to be the best version of a centralized exchange. And the only way to achieve this is to embrace the idea of a decentralized multi-proposer architecture.

Mert: Another advantage is that Solana itself must solve these problems, while through L2 they can resolve these issues faster. It is easier to solve problems on a single box than on 1,500 boxes. In that case, they will gain more attention and accumulate network effects from the start. Regardless of what Solana does, it needs to solve these problems, and because they use the same architecture, they can learn from it and possibly release updates faster.

Anatoly: The issue of competition at the business level is whether these single boxes can survive when they reach a certain load. Building a single box does not immediately solve all problems; you will still encounter almost the same engineering challenges, especially when you consider that the discussion is no longer about Solana's consensus overhead but about the transaction submission process.

The transaction submission pipeline itself can be centralized on Solana, just as on some L2s. For example, a single relayer can receive a large volume of transactions and then attempt to submit them to the validators, with the data rate between the relayer and the validators capped at a level the validators can always process smoothly.
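A minimal sketch of that kind of rate cap, using a token bucket (illustrative only; the actual relayer internals are not specified in this conversation):

```python
import time

class TokenBucket:
    """Forward at most `rate` transactions per second toward the validator,
    with short bursts of up to `burst` transactions allowed."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # caller queues or drops the transaction

relay_limit = TokenBucket(rate=1_000, burst=100)  # hypothetical 1,000 tx/s cap
```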

Moreover, such a design allows components like Jito to iterate at a faster pace. Therefore, I believe the advantages of this design in L2s are actually smaller than people imagine.

4. Is shared block space a "tragedy of the commons" or the key to DeFi capital efficiency?

Mert: If we broaden the discussion, Solana, as an L1, shares block space, which leads to the "tragedy of the commons" problem, similar to the misuse of public pool resources. In L2s, especially those that are not necessarily application chains, developers can have independent block space without sharing with others.

Anatoly: This independence may be more attractive to application developers, but it requires a permissioned environment, because once you adopt permissionless validators or sequencers, you lose that control as soon as multiple applications run simultaneously.

Even in a single application environment, like Uniswap, if there are multiple markets on the platform, these markets may interfere with each other. For example, an obscure meme token might affect the order priority of mainstream markets. If we look at it from a product perspective, imagine that in the future all assets are tokenized, and as the CEO of a newly minted unicorn company, I decide on which platform to conduct an IPO. If I see that the trading volume of SHIB on Uniswap is causing severe congestion to the point where mainstream assets cannot be traded normally, this would undoubtedly be a failure for this application-focused L2.

Therefore, the challenges faced by these application-focused L2s are similar to those of Solana; they need to isolate their states in a way that does not affect other applications. Because even for a single application like Uniswap, if one of its markets causes congestion that affects all other markets, then for a CEO of an IPO company like me, such an environment is unacceptable. I do not want my main market to be one where everyone is trading. I want each trading pair to operate independently.

Mert: What if it is permissioned? Since there is an exit mechanism, can't it work?

Anatoly: Even in a permissioned environment, the local isolation problem still needs to be addressed. Solving this isolation problem in a permissioned environment is not fundamentally different from solving it in a relayer or scheduler.

Mert: Do you think this market analogy can be mapped to any type of application?

Anatoly: Some applications don't have these characteristics. Simple peer-to-peer payments, for example, see little congestion and are very easy to schedule. The challenge in designing isolation mechanisms, and all these seemingly complex things, is that if you cannot guarantee a single market or application won't cause global congestion, then companies like Visa will launch their own dedicated payment L2s, because their transactions never face competition. They don't care about priority; what they care about is TPS. Whether my card transaction is first or last in the block doesn't matter; what matters is that I can walk away within two and a half seconds of swiping my card. So in payment scenarios the priority mechanism isn't key, but payments are a very important practical use case.

My view is that if we cannot correctly implement isolation mechanisms, then the idea of large composable state machines loses its meaning because you will see payment chains and single market L2s emerge. If I am the CEO of an IPO company, why would I choose to launch on Uniswap's chain in the next 20 years? Why not launch my own L2 that only supports my trading pairs to ensure good performance?

This is a possible future, but I see no engineering reason for it unless there are other motives. If we can solve the engineering problems, then I believe composability in a single environment has huge advantages, because the friction of moving capital between all states and liquidity drops dramatically, which is a very important property to me. I believe Solana's DeFi survived the bear market, after suffering more than almost anyone, precisely because its composability improves capital efficiency.

Mert: Vitalik recently stated, "In my opinion, synchronous composability is overrated." I think he probably reached this conclusion based on empirical data, believing that there are not many instances of it being used on-chain. What do you think?

Anatoly: Isn't Jupiter a typical example of synchronous composability? I think he is only looking at Ethereum; meanwhile Jupiter has a huge market share on Solana and a large share of the entire crypto space. Jupiter depends on synchronous composability; without it, Jupiter cannot operate. Look at 1inch, its competitor on Ethereum: it cannot scale because moving funds between L2s, or even back to the same L1, is extremely expensive and slow.
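A toy illustration of what synchronous composability buys a router like Jupiter: several swap legs execute atomically in one transaction, and any failed leg reverts the whole route. This is simplified pseudologic with hypothetical legs, not Jupiter's actual implementation:

```python
def atomic_route(balances: dict, legs: list) -> dict:
    snapshot = dict(balances)   # state before the transaction
    try:
        for leg in legs:
            leg(balances)       # each leg sees the previous leg's output
        return balances         # all legs landed atomically in one transaction
    except Exception:
        return snapshot         # any failed leg reverts the entire route

def swap_sol_for_usdc(b):  # hypothetical leg
    b["SOL"] -= 1
    b["USDC"] += 150

def swap_usdc_for_jup(b):  # hypothetical leg
    if b["USDC"] < 150:
        raise RuntimeError("insufficient output from previous leg")
    b["USDC"] -= 150
    b["JUP"] += 200

print(atomic_route({"SOL": 2, "USDC": 0, "JUP": 0},
                   [swap_sol_for_usdc, swap_usdc_for_jup]))
```

On an asynchronous system, the second leg would have to wait for the first to settle, and a failure midway could leave the route partially filled; atomicity in a single shared state machine removes that risk.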

I think he is wrong. The financial system can be asynchronous; that is how most financial systems operate today, and it doesn't mean those systems fail or collapse. But if Solana succeeds and the ecosystem keeps solving these problems at its current pace, then even if execution only holds at its current level each year, you will see significant improvements. Ultimately, I believe synchronous composability will win.

5. What is Solana's core competitive advantage?

Mert: Let's set engineering aside for a moment and assume engineering is not a moat; other chains can achieve the same results. Chains like Sui, for example, can also achieve synchronous composability and run a smaller validator set. Assume some L2s face the issues you mentioned but can solve them too. I've asked you before: when engineering is no longer a moat, what is? You said content and functionality.

Anatoly: Yes, Solana has not set specific validator targets. The testnet has about 3,500 validators, and the mainnet is also large because I want it to be as large as possible to prepare for the future of the network. If you want as many block producers in the world as possible, you need a large set of validators that allows anyone to enter and participate in every part of the network without permission.

You should test at the highest rates possible because the cost of solving these problems is currently very low. Solana is not handling trillions of dollars in user funds; that is what Wall Street does. Solana is dealing with cryptocurrency, which gives us an opportunity to bring the smartest people in the world to solve these challenges and force them to face these problems.

So my point is that rather than Solana shrinking its validator set for performance, Sui and Aptos are more likely to need to grow theirs. If you find product-market fit (PMF), everyone will want to run their own node because it provides assurance. As the validator set grows, if you start limiting participants, you restrict the network's scalability.

Mert: Okay, you mentioned a question I want to discuss. While this is the goal, the data shows the number of validators has been decreasing over time. It seems you believe this is due to a lack of product-market fit, so they have no motivation to run their own nodes, right? Or what is the reason?

Anatoly: Yes, part of the reason is some staking support from the Solana Foundation. But I am indeed interested in understanding how many validator nodes can sustain themselves; is that number increasing?

Mert: Hold on, we have about 420 self-sustaining validator nodes.

Anatoly: But what was it like two years ago?

Mert: We might not have that data. But we do know that the total stake delegated by the Solana Foundation has decreased significantly compared with two years ago.

Anatoly: At the same time, fees are also increasing. So my guess is that the number of nodes that could sustain themselves two years ago was much lower, even though the total number of nodes was larger back then. So my point is that we need the network to scale to support everyone who wants to run node devices. That is also one of the main purposes of the delegation program, which is to attract more people to participate in running nodes and to conduct some stress testing on the testnet.

But the testnet can never fully simulate the characteristics of the mainnet, right? No matter how many validators they run in the testnet, the situation on the mainnet will still be very different. So as long as the number of self-sustaining nodes is increasing, I think that is a positive trend. The network must be able to physically scale to support such a scale; otherwise, it will limit growth.

Mert: So basically, you're saying the delegation mechanism helps the network stress-test different validator set sizes, but fundamentally the only thing that matters, or the most important thing, is the number of self-sustaining validators.

Anatoly: Exactly. You can construct theoretical counterarguments, such as the extreme case of a single validator that isn't self-sustaining: even then, in a catastrophic failure, if it's the only surviving node, it is certainly helpful. But that falls under endgame "nuclear war decentralization" concerns.

Fundamentally, what really matters is whether the network is growing and succeeding, which relies on self-sustaining validators who can pay their own bills and have enough interest in the network to invest commercial resources to continuously improve, dig deep into the data, and do their job well.

Mert: In a scenario where anyone can run a fast, low-cost, permissionless system, why would people still choose Solana?

Anatoly: I believe the future winner may be Solana because this ecosystem performs excellently in execution and is already ahead in solving all these problems. Alternatively, the winner could be a project that is completely similar to Solana, and the only reason it is not Solana is that it executes faster, enough to surpass Solana's existing network effects.

So I believe execution is the only moat. If execution is poor, you will be surpassed. But the challenger has to perform exceptionally well, well enough to become a killer product with the product-market fit to actually shift user behavior.

For example, if transaction fees are ten times cheaper, will users switch from Solana to other projects? If users only have to pay half a cent, that may not necessarily be the case. But if switching to another place can significantly reduce slippage, that might be enough to attract them or traders to switch.

Yes, it is important to observe the overall behavior of users to see if there is some fundamental improvement that is sufficient to make them choose another product. There is indeed a distinction between Solana and Ethereum. For users, when they sign a transaction and see that they need to pay a fee of $30 to receive an ERC-20 token, even for a very basic state change, this price is outrageous and exceeds their expectations, leading them to choose cheaper alternatives.

Another factor is time; you cannot wait two minutes for a transaction to be confirmed; that is too long. Solana's current average confirmation time is about two seconds, sometimes reaching up to 8 seconds, but it is moving towards 400 milliseconds, which is a huge behavioral change incentive for users to be willing to switch to a new product.

But this is still unknown. What is known is that nothing in Solana's technology prevents the network from continuing to optimize for lower latency and higher throughput. So when people ask why Solana is growing faster than Ethereum and suggest the next project will surpass Solana in turn, in practice the marginal gap a competitor can open over Solana is very small, and creating a difference large enough to change user behavior is a significant challenge.

Mert: If execution is the main factor, then fundamentally, this becomes a matter of organization or coordination. One distinction between Solana's vision and the so-called modularity (though this is not a formal term) is that, for example, if you are an application developer like Drip building on Solana, you need to wait for L1 to make some changes, such as addressing congestion issues or fixing bugs.

But if you are on an L2 or an application chain, you can directly address these issues yourself. Perhaps from this perspective: on these other chains, you might be able to execute operations faster rather than relying on shared space. So if this is true, then the overall execution speed would be faster.

Anatoly: Over time, this difference will gradually narrow. For example, Ethereum used to be very slow. If you were running Drip on Ethereum and transaction fees skyrocketed to $50, you would go ask Vitalik (the founder of Ethereum) when this issue could be resolved. He might respond, "We have a six-year roadmap, brother; this will take some time." But if you ask teams like FireDancer or Agave, they would say, "There is already a team working on this, and it will be fixed as quickly as possible in the next release."

This is a cultural difference. The core L1 team, along with the entire infrastructure team, including you, is clear that when the network slows down or there is global congestion, this is the most urgent (p0) issue that everyone needs to address immediately. Of course, sometimes unexpected issues arise, such as adjustments to the fee market design.

These issues become less common as the scale of network usage gradually expands. I do not believe there are currently urgent design changes needed that would take six months to a year to deploy. I do not see challenges like that appearing on the road ahead.

However, you know there will definitely be some bugs or other unforeseen issues at launch that require people to work overtime on weekends; that is part of the job. If you have your own dedicated L2 application chain, do not need to share resources, and have complete control over that layer of infrastructure, then you might move faster, but that comes at a high cost and is not affordable for everyone.

Therefore, a shared, composable infrastructure layer may be cheaper and faster for the vast majority of use cases, serving as a shared and usable software-as-a-service infrastructure layer. As bugs are fixed and improvements are made, this gap will continue to narrow.

6. Will Solana's inflation rate decrease?

Mert: Another related criticism is SOL's inflation mechanism; many argue it props up more validators by increasing rewards, but that may come at the expense of pure investors. When people say Solana's inflation rate is too high, what is your first reaction? How do you view this?

Anatoly: This is an endless debate; changing the numbers in a black box does not really change anything. You can make some adjustments that affect certain people to the point where the black box cannot operate normally, but that in itself does not create or destroy any value; it is merely an accounting operation.

The inflation mechanism is what it is because it directly replicated Cosmos's mechanism, since many of the initial validators were Cosmos validators. But does inflation affect the network as a whole? It may affect individuals under a particular tax regime, but for the network overall it is a cost to non-stakers and an equivalent benefit to stakers, which mathematically sums to zero. So from an accounting perspective, inflation does not affect the network viewed as a whole black box.
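A toy version of that zero-sum accounting, with made-up numbers (and assuming stakers capture all new issuance; as he notes, tax treatment can make it non-neutral for individuals):

```python
supply = 1_000_000
staked, unstaked = 700_000, 300_000
inflation = 0.05

issuance = supply * inflation
new_supply = supply + issuance

staker_share_before = staked / supply                  # 0.7000
staker_share_after = (staked + issuance) / new_supply  # ~0.7143

nonstaker_share_before = unstaked / supply             # 0.3000
nonstaker_share_after = unstaked / new_supply          # ~0.2857

# The gain to stakers equals the dilution of non-stakers: net zero.
gain = staker_share_after - staker_share_before
loss = nonstaker_share_before - nonstaker_share_after
print(round(gain, 6), round(loss, 6))  # equal magnitudes
```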

Mert: I have seen people say that since it is arbitrarily set, why not just lower it?

Anatoly: Go ahead, publish a proposal; I personally do not care. I have said countless times, change it to whatever value you want and convince the validators to accept it. The main consideration when these numbers were initially set was to avoid causing a complete disaster, and since Cosmos has not had issues because of this setting, it is considered reasonable enough.

7. Are the development costs of FireDancer too high?

Mert: Now let's return to the challenges of coordination. We've been talking up FireDancer recently, and Jerry mentioned that some people now feel FireDancer is a bit overrated. But Jerry also said FireDancer has indeed slowed progress, because Anza engineers and FireDancer engineers obviously need to reach consensus on certain things before they can move forward, so there are some delays at the start. Your point seems to be that once the specifications and interfaces are sorted out, iteration speed will accelerate, right?

Anatoly: Yes, it can basically be divided into three steps: the first step is the design phase, where you need to reach consensus on what to do; next is the implementation phase, where both teams can work in parallel; then comes the testing and validation phase to ensure that no security or liveness issues arise. I think the design phase may take more time, but implementation is done in parallel, so both teams can complete their work simultaneously, and the testing and review phase will be faster because the probability of two independent teams releasing the same bug is lower.

I think the biggest difference is that Ethereum typically operates like this: we release a major version that includes all the features targeted for that release, and they focus on the feature set rather than the release date. In contrast, Solana operates almost exactly the opposite; it sets a release date, and if your features are not completed, they will be cut, making the release rhythm much faster.

Theoretically, with two teams both wanting to accelerate, the iteration cycle could be sped up further. But that requires core engineers to have a sense of urgency, a feeling that "we need to ship this as soon as reasonably possible." In that case, you can lean on the redundancy of having two implementations. Culturally, I believe both teams have similar backgrounds: they are not academic teams; they grew up in a technical pressure cooker.

Mert: This leads to the third point I want to raise about FireDancer. One might assume you have no capacity left to execute, because you are working on a phone rather than helping with L1 development or coordinating these client teams. Is this really the optimal choice for you personally?

Anatoly: The last major change I was involved in with FireDancer was moving the account DB indexing out of memory. At that time, I could write a design proposal and a small implementation to prove its feasibility, but the problem was that to complete this work, a full-time engineer needed to focus on this task. I could assign this task to Jash, who would be responsible for implementing it, but including testing and release cycles, the entire process would take a year.

For me, if I could join Anza or FireDancer as a pure individual contributor (IC), just watching Grafana (a performance monitoring tool) and building things, that would be fantastic. But the reality is that my attention is divided among countless projects. So I find the place where I can have the greatest impact is defining problem statements: growth issues, concurrent leader issues, censorship issues, MEV competition issues, and so on. I can propose solutions and discuss them with everyone; eventually everyone agrees that my problem analysis is correct and presents their own possible solutions. We iterate on the design together until it takes shape and solidifies.

Then, as the sense of urgency I foresee gradually intensifies, people already have the design proposal. The most difficult part—the consensus between the two teams—has been completed, and what remains is just implementation and testing. So, my role is almost like a Principal Engineer in a large company. I do not write code; instead, I communicate with multiple teams, saying, "I noticed you are having difficulties in this area, and other teams are too. You need to solve the problem this way so that we can reach consensus on this matter." This is probably the opportunity where I can have the greatest impact in core areas.

Mert: This is indeed part of the job's responsibilities, but it is by no means easy. So, are you saying, "Jack Dorsey can do it, Elon Musk can do it, so I can also do these things while developing a phone"?

Anatoly: Actually, it is not like that. There is an outstanding engineer who is the head of mobile, a close friend of mine for over ten years, who has been involved in building BlackBerry, iPhone, and almost every phone you can think of. There is also a very excellent general manager; these two work together to manage the entire team, while I am responsible for setting the vision.

I do not think people fully understand this vision, but if you look at Android or iOS, they are essentially a cryptographically signed firmware specification that defines the entire platform. Everyone has such a device, and it ensures security through trusted boot. When you receive a firmware update, it verifies the correctness of the firmware signature and rewrites the entire phone system.

The most critical part of this is that cryptographic signature, which could entirely be generated by a DAO that signs the entire firmware and is responsible for its release. Imagine if Apple's own cryptographic signature certificate were controlled by a DAO; the entire concept of the software platform would be disrupted. This is that extremely cool yet somewhat strange "hacker" mindset.
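A minimal sketch of that trusted-boot idea, using Ed25519 signatures via the PyNaCl library; the DAO-held key and firmware blob are hypothetical stand-ins:

```python
from nacl.signing import SigningKey, VerifyKey
from nacl.exceptions import BadSignatureError

# Stand-in for the DAO-controlled signing key that would replace
# Apple's or Google's platform certificate in this vision.
dao_key = SigningKey.generate()
firmware = b"example-firmware-image-v2"  # hypothetical firmware blob

signed = dao_key.sign(firmware)  # the DAO publishes firmware + signature

# On-device trusted boot: verify the signature before flashing.
verifier = VerifyKey(bytes(dao_key.verify_key))
try:
    verifier.verify(signed.message, signed.signature)
    print("signature valid: apply firmware update")
except BadSignatureError:
    print("invalid signature: reject update")
```

The platform-defining part is simply who holds the signing key: swap a corporate certificate for a DAO-controlled one and the same verification flow yields a community-governed firmware pipeline.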

Aside from that, my main job is to set such a vision, drive the team to sell more phones, make it a truly meaningful product, and ultimately achieve the milestone where the entire ecosystem can control its firmware. I do not participate in the day-to-day execution work.

Regarding Elon Musk, I think his way of working might be like this: he has a grand idea and then finds an engineer who can convincingly tell him, "I can implement the entire project from start to finish." If you can find such a person, then all you need to do is provide funding to accelerate the process. After giving that person the funds, they will complete the entire project themselves and then hire others to speed up their progress.

I try to operate in this way, not sure if it is the same as Elon’s approach, but I believe it is a method to handle multiple projects simultaneously: having a grand vision, a very specific goal, and then finding someone truly capable of achieving it. If time were unlimited, I could build every part. And after funding them, they would accelerate the realization of everything.

Mert: You mentioned that the vision is clear, but the ideal outcome seems to be this: suppose you succeed in selling a large number of these phones, making an impact that breaks beyond crypto Twitter and reaches Apple. Then Apple might lower their fees. In other words, what you are doing changes the world.

Anatoly: It indeed brings about change; software companies in the Midwest no longer have to pay Apple a "ransom" of around 30%, allowing for the development of more efficient software and games, which is indeed a good thing.

Mert: But this feels more like an altruistic effort rather than a business endeavor, right?

Anatoly: It can only truly be realized if this altruistic act can also succeed as a business endeavor. If Apple is to lower their fees, they must feel competitive pressure from a growing and commercially viable ecosystem. Otherwise, they will just delay until that ecosystem fades away due to a lack of commercial viability. Therefore, this ecosystem must find its product-market fit and have the ability to sustain itself.

But that does not mean it won't change the world. If it can reduce Apple's revenue share, that is the essence of capitalism: when you see a group extracting rent at a 30% fee, and you provide the same service at a 15% fee, you change the market economics, benefiting everyone, including consumers.

Mert: So, what you mean is that you must believe you can actually defeat one of the two largest companies in the world, Apple and Google, in some sense. So why do you think you can compete with them?

Anatoly: Clearly, a 30% revenue share is indeed too high, as people like Tim Sweeney are suing them everywhere; this has become a pain point for businesses using Apple and Google’s distribution channels. Apple and Google collect rent this way, and consumers do not care about these fees because they are hidden from them. Consumers pay a fixed amount for the app, and Apple takes 30% from that.

Solving this problem is a challenge in network building, and I believe the crypto space has an advantage in this regard. Cryptographic technology can financialize digital assets and scarcity in a way that is different from Web 2. But even so, it could still fail. The reason for failure is not that app developers do not want lower fees; that is clearly the case. The reason for failure is that we have not yet found a way to leverage the incentives provided by cryptographic technology to scale the network.

This is a truly tricky problem; it is not a product issue or a business model issue, but rather a question of how we can force users to change their behavior and switch to other networks.

8. How does Solana compete with Ethereum?

Mert: Changing the topic, I want to talk about ZK-related issues. The ultimate vision of blockchain seems to be that everything is driven by ZK, where you do not need to execute all operations on a full node, just verify the proof. However, Solana does not seem to have similar plans.

Anatoly: If you have read my article on APE (Asynchronous Program Execution), you will see that it significantly changes how validators operate. By sharing a common prover, validators can verify state. So you can have multiple validators sharing a trusted execution environment (TEE) or some other trust model, even a ZK-based one. Once APE completes fully asynchronous execution and computes the complete snapshot hash, you can actually realize this idea: a rollup verified entirely through ZK. That doesn't mean a rollup is needed, or that rollups are somehow incompatible with Solana.

That viewpoint is absurd; asynchronous execution lets you compute the snapshot hash under your own trust model, whatever environment you use: running your own full node, sharing a TEE, or anything else. None of that affects my full node. If I run my own full node, you can use any environment you want and do what you want.
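An illustrative sketch of that idea: any environment that executes the same ledger should derive the same snapshot hash over account state, so full nodes can compare digests regardless of where execution happened. This is deliberately simplified; Solana's actual snapshot hashing scheme differs:

```python
import hashlib

def snapshot_hash(accounts: dict[str, bytes]) -> str:
    # Fold accounts in a canonical (sorted) order so every executor,
    # whatever environment it runs in, derives an identical digest.
    h = hashlib.sha256()
    for pubkey in sorted(accounts):
        h.update(pubkey.encode())
        h.update(accounts[pubkey])
    return h.hexdigest()

# Two independent executors that reach the same end state must agree:
state_a = {"alice": b"\x00\x01", "bob": b"\x02\x03"}
state_b = {"bob": b"\x02\x03", "alice": b"\x00\x01"}  # same state, different order
assert snapshot_hash(state_a) == snapshot_hash(state_b)
print(snapshot_hash(state_a))
```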

The core question is, what is the difference between Solana and Ethereum and ZK? For the survival of the network, it must have commercial viability, meaning it needs to be profitable. In my view, the only business model for L1 is priority fees, which is essentially the same as MEV. Rollups generate their own MEV, perform independent sequencing, and compete with L1, creating a parasitic competitive environment for L1.

All this competition is good, but it does not belong to the Solana ecosystem. Those rollups are based on EVM, leveraging the power of open source to accelerate development everywhere, while those in the Solana ecosystem are based on SVM.

In my view, this is the fundamental difference between ZK applications on Solana and on Ethereum. Light Protocol is great because, on Solana's mainnet, sequencing is done by Solana's validators.

Mert: Let’s take a very theoretical example and completely reverse the assumption: suppose bandwidth has been maximized, latency minimized, and Moore's Law has been fully realized. Even in a saturated channel, just adding some extra hardware can solve the problem. If we really achieve these but still find it insufficient, what then? Suppose cryptographic technology really gains more popularity (though I personally think it may not happen, but let’s assume it does), what would happen?

Anatoly: Well, you cannot start another network because Solana's full nodes have completely saturated the ISP's bandwidth; every ISP has no more capacity left; we have consumed all available bandwidth.

Mert: I guess all engineering problems need to be solved before complete saturation.

Anatoly: It needs to be recognized that 1 Gbps connectivity is available almost everywhere in the world, and almost every phone has it. Even at the Turbine protocol's current efficiency, that bandwidth equates to roughly 250,000 transactions per second (TPS). That is an astronomical, almost absurdly large capacity. Let's saturate that first, and then we can discuss other limits, like Moore's Law.

But for now, Solana is still about 250x short of that load. We need a 250-fold increase before other limits even come into play. And this so-called 1 Gbps is a 25-year-old, thoroughly mature technology standard.
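The back-of-envelope math behind these figures, assuming an average transaction size of roughly 500 bytes (an inferred assumption; the conversation does not state a size):

```python
link_bps = 1_000_000_000            # 1 Gbps link
tx_bytes = 500                      # assumed average transaction size

tps_ceiling = link_bps / 8 / tx_bytes
print(int(tps_ceiling))             # 250,000 TPS at full saturation

# The "250x away" remark implies current sustained load of roughly:
print(int(tps_ceiling / 250))       # ~1,000 TPS
```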

We are not even close to saturating this technological capability. I believe that when we reach complete saturation at 1 Gbps bandwidth, when Turbine is fully saturated, that will be the scenario that the FireDancer team has already demonstrated in a lab environment. Of course, this environment is still distributed, but essentially it is a lab environment, and this is indeed achievable.

However, to make this technology commercially viable, there are still many issues to resolve, and applications need to use this capacity effectively. Most of Solana's load today comes from market activity: markets saturate first, and then arbitrage fills the remaining block space. But that is not yet what I call "economic saturation."

Mert: In an environment where Ethereum has higher quality assets and higher transaction volumes due to existing liquidity effects, how does Solana compete? Suppose these assets, even stablecoins, do not reach Ethereum's level; what needs to change?

Anatoly: We can start calling Ethereum's assets "Legacy Assets," and then let all the new things launch on Solana. This meme needs to change; the new version is that Ethereum is the platform for "Legacy Assets," while Solana is the birthplace of new things.
