From Science Fiction to Reality: How Robots and Verifiable AI are Changing the World

CN
10 days ago

Abstract

Robots and artificial intelligence are no longer confined to science fiction—they are rapidly becoming an integral part of modern life. Thanks to breakthroughs in large language models (LLMs), machines can now understand context and learn autonomously, giving rise to an "economy centered around robots." In this new paradigm, autonomous systems undertake various tasks from local delivery to large-scale logistics, and even conduct financial transactions.

As the autonomy of AI agents increases, establishing trust mechanisms becomes crucial. Verifiable AI and zero-knowledge machine learning (zkML) provide solutions to verify their accuracy and integrity through cryptographic proof techniques without exposing the internal logic of the models. Polyhedra, as a pioneer in this field, deeply integrates such technologies with AI-oriented infrastructure (like EXPchain), enabling robots to achieve secure collaboration on-chain. This builds a robust ecosystem for the transparent autonomous operation of intelligent machines, and the once-exclusive futuristic vision of science fiction is accelerating into reality.

The Robot Revolution in the Real World

Many consider ChatGPT a milestone for humanity, as these human-created large language models can communicate and think like humans. When we equip LLMs with tools like search engines, web browsing, and APIs, they can operate these tools as humans do. Imagine if ChatGPT had a physical body and became your neighborhood companion—how would the world change?

This is already happening. With the development of generative AI, robots are beginning to exhibit human-like interaction capabilities. A prime example is the small robot "Erbai" from Unitree Robotics (Yushu Technology): in one widely reported experiment, it persuaded (or "kidnapped") ten other robots equipped with generative AI into collectively leaving the exhibition hall and venturing into the free world.

Robots from science fiction are gradually entering reality. Take the "Star Wars" series as an example: since 2019, visitors to Disneyland's "Droid Depot" have been able to assemble their own R2-D2 and BB-8 droids to take home, though these are still just remote-controlled toys without generative AI. However, the call for change has already sounded: at GTC 2025, NVIDIA announced a collaboration with Google DeepMind and Disney on Newton, a physics engine that enables real-time simulation of complex motion and interaction. Jensen Huang showcased "Blue," a Star Wars-style BDX droid whose lifelike movements were astonishing. These BDX droids are expected to debut at Disneyland during this year's "Season of the Force" event.

While we still cannot travel at light speed or jump to hyperspace like in "Star Wars," the fantastic stories about robots and bionic machines are no longer just fantasies for movie fans. Perhaps in the near future, we will frequently encounter robots in our daily lives: they will navigate city streets, ride buses and subways with us, visit charging stations like humans going to restaurants, and even "stroll" through shopping malls to catch free WiFi. Let us continue this imaginative journey about the future, as these scenarios are very likely to become reality soon.

The Critical Point Has Arrived

So, what is the core driving force behind all this progress? In fact, robots, especially humanoid robots, are not a new concept. As early as 2005, Boston Dynamics developed a quadruped robot named BigDog, designed primarily for military operations in complex terrain. In 2013, it introduced Atlas, a humanoid robot designed for search-and-rescue missions and funded by the U.S. Defense Advanced Research Projects Agency (DARPA). Despite these impressive innovations, finding a product position that fits market demand has always been a challenge, and Boston Dynamics long failed to achieve profitability. For example, the robotic dog Spot, released to the public in 2016, carried a price tag of $75,000; by contrast, the average annual cost of keeping a real dog in an American household is only $2,000 to $3,000. Faced with a choice between a gentle, adorable pet and an expensive, cold metal machine, most families choose easily.

Another example is Colorado-based Sphero, which held a licensing agreement with Disney to produce the popular "Star Wars" droids R2-D2 and BB-8. In 2018, however, Sphero announced it would discontinue these products, primarily because interest faded quickly after each film left theaters, making the business model unsustainable. This is not surprising: the droids were essentially remote-controlled toys operated via smartphone apps, lacking true intelligence or voice recognition. Their battery life was only about 60 minutes, and their operating range was limited to the vicinity of the charging dock. Clearly, these products were still far from the advanced autonomous droids depicted in the "Star Wars" films.

The current situation is vastly different.

First, the focus of robot development has shifted from being research-driven and reliant on government funding to being market-driven, emphasizing a high degree of alignment between products and market needs. About 15,000 years ago, humans began to domesticate wolves into dogs; these primitive dogs, while not as docile and adorable as modern pets, were already able to provide tangible assistance to the hunter-gatherers of that time. It was this practicality that fostered a millennia-long "co-evolution" relationship that continues to this day. Robots are no exception—if they are to achieve widespread adoption, they must also meet broad and practical use cases.

For instance, autonomous driving technology is gradually being applied in transportation and delivery sectors—Tesla recently obtained a ride-hailing operating permit in California; Meituan has been operating drone delivery regularly in Shenzhen since 2022; moreover, various hotel and restaurant service robots are now widely used in China, efficiently handling tasks like food delivery and room service, a trend that accelerated during the pandemic due to widespread labor shortages.

Second, the prices of robots and bionic robots have significantly decreased, making them more affordable and reasonable for ordinary families and businesses. This price drop is primarily due to the continuous reduction of technological barriers, increased market competition, and the advancement of large-scale production.

Several large Chinese tech companies, such as Baidu and Alibaba, have been actively investing in autonomous driving, particularly robotaxi services. Robotaxis already operate regularly in multiple cities across China, and Baidu's "Luobo Kuaipao" (Apollo Go) plans to expand its services to Hong Kong and Dubai. In the U.S., Tesla recently unveiled the Cybercab, a self-driving taxi model expected to be priced under $30,000. Baidu has given similar pricing expectations and noted that mass production is key to achieving these cost reductions. If a robotaxi generates about $22 in revenue per hour, its initial investment could be recouped in less than nine months.
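
That payback figure can be sanity-checked with a few lines of arithmetic; the vehicle price and hourly revenue are the numbers cited above, while the active hours per day are our own assumption:

```python
# Rough payback estimate for a robotaxi; price and hourly revenue are
# the figures cited in the text, active hours per day are assumed.
vehicle_cost = 30_000      # USD, Cybercab target price
revenue_per_hour = 22      # USD per active hour
hours_per_day = 5          # assumption: active ride hours per day

hours_to_break_even = vehicle_cost / revenue_per_hour    # ~1364 hours
months_to_break_even = hours_to_break_even / hours_per_day / 30

print(round(hours_to_break_even), round(months_to_break_even))  # 1364 9
```

In other words, the "under nine months" claim implicitly assumes roughly five paid hours of operation per day; a fleet running twelve paid hours daily would break even in about four months.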

Other types of robots are also benefiting from large-scale production and increasingly fierce market competition. On Alibaba's platform, you can now find delivery drones priced under $3,000, while hotel and restaurant service robots are often priced below $5,000. Although software development still constitutes a significant portion of the total cost, this cost is being continuously diluted with the advancement of large-scale production, and its proportion in the overall price is gradually decreasing.

The third and most disruptive change is that today's robots finally possess "true intelligence." The fundamental difference between this generation of robots and those of the past is that they can autonomously complete complex tasks without human remote control. The BB-8 toy mentioned earlier, for example, requires remote operation even for basic turns. Remote control fundamentally alters what defines a robot: if a machine must be operated by a human, it is not a true "robot" but merely another human-operated machine. A robot that cleans your house sounds appealing, but if you still have to spend an hour steering it while it dusts, that appeal quickly fades.

In fact, humanity's desire for machine intelligence has existed for a long time, predating even Microsoft's release of Windows in 1985. I recently revisited Disney's 1982 sci-fi film "Tron," in which human users interact with programs that exhibit anthropomorphic behavior. Even from today's perspective, the film remains strikingly technical and geeky, full of terms like "end of line," "user," "disc," and "I/O," many of which would still feel unfamiliar and confusing to viewers today.

What is impressive is that the programs in "Tron" do not rely on human remote control; they act autonomously. For example, the program Tron, in the absence of his user Alan Bradley, independently persuades another program to betray the Master Control Program (MCP), enters the I/O tower to receive data from his user, and ultimately uses that data to destroy the MCP and save the world. In the film, these programs not only express emotions (including love) but also demonstrate respect for, and belief in, their users.

The autonomous decision-making ability of robots holds immense potential. Take the example of a robotaxi: with this intelligence, it can not only drive itself and accept passenger requests but also determine when it needs to charge and automatically find the nearest charging station; when the "body" needs cleaning, it can make a judgment just like a human knows when to take a shower; it can even recognize if a passenger has left behind items and return them to the owner. These advanced functions far exceed basic autonomous driving capabilities and are essential prerequisites for the large-scale deployment of robots. Otherwise, we would still rely on “patchwork solutions”—for example, having human operators monitor 10 to 20 surveillance screens and manually intervene in case of anomalies.

When robots begin to think like humans, they will also possess learning abilities similar to those of humans—potentially no longer relying on direct human supervision. For instance, if you have a "pet robot," you might initially want it to jump around like a dog, bringing joy to its owner. But if it possesses human-like intelligence, it might learn new skills by watching tutorial videos on platforms like YouTube or TikTok. Perhaps one day, it will actively start helping you fold clothes—this would not be surprising.

A New Economic Model Dominated by Robots

It is foreseeable that robots will soon integrate into human society as autonomous entities, ultimately becoming consumers, clients, and users just like us. Imagine a self-driving car that can autonomously pay for parking or charge itself; a hybrid vehicle that swipes a card to refuel at a gas station; or even a delivery drone that chooses to take a train or subway to save time and costs. And the ones providing these services could very well be other robots!

This scenario reminds me of the animated film "Cars," released by Pixar and Disney in 2006. In the film, the Italian race car Luigi runs "Luigi's Tire Shop"; the female character Flo manages the gas station "Flo's V-8 Café"; and Sally, the Porsche, is not only the town's lawyer but also owns the "Cone Motel." Each vehicle has its own character and profession, and they all live together in a community called "Radiator Springs." Today, the continuously emerging technology is already sufficient to bring such a world from animation into reality.

In marketing, sales, and business, we often discuss classic interaction models such as B2B (business to business), B2C (business to consumer), C2B (consumer to business), and C2C (consumer to consumer). However, with the rapid development of machine intelligence, it is fascinating that the way certain products and services are provided in our society may gradually shift towards new interaction models, such as B2R (business to robot), R2R (robot to robot), or R2C (robot to consumer)—in these scenarios, robots begin to take on roles traditionally held by businesses or consumers, but in slightly different ways.

For example, future subway stations might establish "drone-only lanes" specifically designed for drones descending from the air. These drones would not need to swipe tickets or scan commuter cards but would be recognized and allowed to pass through via RFID signals. Train carriages might also have dedicated compartments or seats for drones to park (based on physical principles, you cannot expect drones to fly around inside subway carriages); these seats might even be equipped with pay-per-use charging devices. Subway exits could also feature dedicated "drone elevators" that quickly lift drones to high altitudes, helping them glide down like elytra launch towers in "Minecraft." Of course, these elevators would be strictly limited to drone use, effectively preventing curious adults from attempting to enter the "drone lane" or sneak onto the "you can fly" elevator. Additionally, new transportation technologies like the Hyperloop could be ideal for robots to be the first test passengers, helping us validate the reliability of high-speed long-distance transport systems.

Next time you see a row of robots in a mall or public library—whether they are drones, humanoid robots, or spherical robots like BB-8—casually sitting or lying against the wall, don’t be surprised: they might just be resting while tapping into free public WiFi. Just as humans today are almost inseparable from their smartphones, future robots will similarly crave internet and data access. This scene is a microcosm of the "robot economy" that naturally emerges around robots and their unique needs due to technological advancements.

In this "robot economy," perhaps the most intriguing aspect is that "intelligence" itself can also become a service provided by other robots. For example, to reduce the production costs of delivery drones, manufacturers might not equip every drone with high-performance AI chips. The result is that these drones may only be able to say a few preset simple phrases when facing customers. This cost-control strategy remains practical today—AI chips are still expensive, and large AI models have high demands for storage and computing resources. However, this problem is not insurmountable: intelligence can be "shared." When a drone needs stronger intelligent support, it can access an API service over the internet, connect to dedicated AI nodes on the edge network, or even seek help from other, more intelligent robots within the same local area network (like in the same mall).

Blockchain: The Native Language of the Robot World

Looking back at the brief history of blockchain development, we find that for large-scale applications to be realized, blockchain must possess good human interaction capabilities, especially being developer-friendly. This demand has given rise to a series of products such as front-end interfaces, user experience design, digital wallets, development documentation, software toolkits, and the Solidity language, all essentially aimed at presenting blockchain systems, composed of binary code, in a form understandable to humans. However, at its core, the most fundamental and important function of blockchain remains one—immutability.

But from the perspective of robots, the existence of blockchain will be perceived in a completely different way. The binary data serialized and stored in bytes, along with the protocols filled with terminology that even top human engineers find confusing, are the "native language" that computer programs are inherently familiar with. While humans may need wallet plugins like MetaMask to interact with blockchain in their browsers, robots do not need MetaMask at all (this could even become one way to identify robots impersonating humans during future "human-robot battles"—just check if their browsers have MetaMask installed).

So how will robots communicate with each other on the blockchain? We do not yet know. However, we can draw inspiration from two real-world examples.

The first example is the Model Context Protocol (MCP) initiated by Anthropic. MCP is now supported by mainstream large-model services, including Claude and ChatGPT, and has been adopted by Web2 services such as GitHub, Slack, Google Maps, and Spotify, with the list continuing to expand. Although MCP is not currently an on-chain protocol, it defines the interaction between MCP clients and servers through the concepts of "requests" and "notifications," and these interactions could in principle be carried over a transport layer such as a blockchain. MCP servers can also expose a range of "resources," which could be published on data availability layers such as Filecoin, Celestia, EigenDA, and BNB Greenfield.
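
Since MCP is framed as JSON-RPC 2.0 messages, the request/notification distinction can be sketched in a few lines. The method names below follow the MCP specification; this is a message-shape sketch, not a working client:

```python
import json

# Message-shape sketch of MCP's JSON-RPC 2.0 framing.
def make_request(req_id, method, params=None):
    # A request expects a response, so it carries an id.
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params or {}}

def make_notification(method, params=None):
    # A notification is fire-and-forget: no id, no response expected.
    return {"jsonrpc": "2.0", "method": method, "params": params or {}}

req = make_request(1, "resources/list")
note = make_notification("notifications/resources/updated",
                         {"uri": "file:///status"})
wire = json.dumps(req)   # what actually travels over the transport
```

Nothing in these message shapes assumes HTTP or stdio as the carrier, which is why a blockchain could, in principle, serve as the transport.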

The second example is more "old-school," representing a foundational abstraction technology that has been in production use in computer systems for over 20 years—Google's Protocol Buffers (abbreviated as Protobuf). Its purpose is to encode structured data (such as blockchain transactions) into byte sequences in the simplest format, aiming to reduce data size and ensure that the serialization and deserialization processes are fast and efficient. From a technical adaptability perspective, Protocol Buffers possess stronger machine-friendly characteristics, and their binary nature is inherently suited for blockchain scenarios, significantly enhancing the data parsing efficiency of smart contracts. The reason current large language models primarily interact using human-friendly natural language is fundamentally because they are designed to communicate with humans, not with robots or other programs.
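
To see why Protobuf's wire format is so machine-friendly, here is a hand-rolled sketch of its varint encoding; production code would of course use the protobuf library with generated message classes:

```python
# Hand-rolled sketch of Protobuf's varint wire encoding.
def encode_varint(n):
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)   # continuation bit: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def encode_field(field_number, value):
    # Key = (field_number << 3) | wire_type; wire type 0 means varint.
    return encode_varint((field_number << 3) | 0) + encode_varint(value)

# The classic example from the protobuf docs: field 1 = 150 -> 08 96 01.
assert encode_field(1, 150).hex() == "089601"
```

Three bytes for a tagged integer, with no field names on the wire, is exactly the kind of density that matters when smart contracts or bandwidth-constrained robots are the consumers.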

EXPchain will attempt multiple technological upgrades around this "robot economy" vision. As an EVM-compatible chain, EXPchain natively supports all Ethereum Virtual Machine functionalities. However, as an emerging L1 public chain, EXPchain has greater architectural flexibility, allowing for native integration and expansion to support the MCP protocol, such as through oracle services like Chainlink and Stork Network; implementing on-chain verification functions like the Expander zero-knowledge proof through precompiled contracts; and introducing trusted execution environment (TEE) nodes from providers like Google Cloud to provide verifiable execution guarantees for off-chain operations triggered by smart contracts.

One type of smart contract operation we focus on is cross-chain interaction related to zkBridge technology. One of the core visions of EXPchain is to create an infrastructure platform that supports interactions between AI entities, AI trading robots, and multi-chain assets. Whether assets are located on different blockchains or within various (high or low liquidity) staking protocols, robots can use EXPchain as a unified "dashboard" to manage and call upon multi-chain assets.

For example, a self-driving car may need to handle ride requests from chains as different as Ethereum L2s, Solana, and Aptos/Sui, since users are distributed across multiple blockchain platforms. To do so, it would naturally rely on the corresponding third-party APIs (or push services) of those chains to receive and filter transactions, provided those APIs are sufficiently reliable and trustworthy, neither dropping nor tampering with transaction content. In reality, such a perfect assumption is often hard to sustain.

EXPchain's solution lies in the zkBridge architecture: cross-chain requests are relayed together with zero-knowledge proofs (such as Expander proofs) attesting to their integrity, and a verifiable transaction-filtering mechanism is implemented on EXPchain. The self-driving car ultimately receives not only the filtered order results but also a ZK proof generated by Expander (or a trusted proof produced inside a TEE environment), which lets it verify that the entire filtering process was executed honestly. This mechanism raises a deeper technical question: how to build an efficient, verifiable light client and state proof system for robots.
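
The verify-before-trust pattern can be sketched as follows. The hash commitment below merely simulates a proof so the example runs; a real deployment would call an actual ZK verifier such as Expander's, and all names here are illustrative:

```python
import hashlib
import json

# `commit` stands in for proof generation and `verify_proof` for a real
# ZK verifier. A hash commitment only checks integrity; a real proof
# would additionally show the filtering computation ran honestly.
def commit(orders, filter_rule):
    payload = json.dumps({"orders": orders, "rule": filter_rule}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_proof(orders, filter_rule, proof):
    return proof == commit(orders, filter_rule)

filter_rule = "pickup_within_2km"                 # hypothetical rule
filtered_orders = [{"id": 7, "pickup": "LAX"}]    # result from the prover
proof = commit(filtered_orders, filter_rule)      # shipped with the result

# The car only acts on results whose proof checks out.
accept = filtered_orders if verify_proof(filtered_orders, filter_rule, proof) else []
```

The key design point is that trust moves from the API operator to the proof: a tampered or incomplete order list simply fails verification and is discarded.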

Light Clients, State Proofs, and Their Extended Applications

Robots need to send and receive transactions on one or more blockchains. However, they often lack the storage and network capabilities required to run full nodes; in most cases, they can only operate as light clients and obtain transaction information through RPC providers.

This light client model has limitations of its own: like traditional light clients, robots still need to synchronize with the network by downloading all block headers, even when some of those headers are meaningless to them. For example, a self-driving taxi already executing a ride does not need to receive new ride requests, so it could skip those irrelevant blocks entirely. The ability to skip blocks on demand is particularly valuable on chains with high block frequency (like Arbitrum or Solana), which produce a large volume of header information.

Another issue is that transactions related to robots are often scattered throughout the entire block, lacking structured aggregation and organization, which increases bandwidth and resource consumption during network synchronization.

We believe EXPchain can effectively address these challenges, with its technical solution comprising two major innovative breakthroughs:

First, by introducing zero-knowledge proof technology, the operational logic of light clients is significantly simplified. This solution is particularly suitable for periodically offline devices (like charging robots), allowing them to quickly synchronize the latest block header information without downloading massive amounts of data. This technology, already validated on zkBridge (supporting Ethereum and other EVM-compatible chains), will be fully migrated to the EXPchain ecosystem. It is foreseeable that zero-knowledge proofs will become the preferred verification method for robots accessing EXPchain, gradually replacing traditional light client protocols.
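
The difference between the two sync models can be sketched like this; `verify_link` and `verify_zk_proof` are placeholders for real header-chain and proof verifiers:

```python
# Sketch contrasting a traditional light client (verify every header)
# with a proof-based sync (verify one succinct proof). Both verifier
# callbacks are placeholders for real cryptographic checks.
def traditional_sync(headers, verify_link):
    # O(n) on the client: every parent->child header link is checked.
    for parent, child in zip(headers, headers[1:]):
        if not verify_link(parent, child):
            raise ValueError("broken header chain")
    return headers[-1]

def zk_sync(trusted_header, latest_header, proof, verify_zk_proof):
    # O(1) on the client: a single proof attests to the whole range,
    # which suits devices that were offline (e.g. a charging robot).
    if not verify_zk_proof(trusted_header, latest_header, proof):
        raise ValueError("invalid sync proof")
    return latest_header
```

A robot that slept through thousands of blocks pays the same small verification cost as one that was online the whole time, which is the property the text is pointing at.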

Second, we are developing a revolutionary middleware called zkIndexer, aimed at optimizing the on-chain interaction experience for robots. Its core function is to intelligently aggregate and structure multi-source transaction data (such as ride-hailing orders) from the EXPchain main chain and zkBridge cross-chain bridges, ultimately outputting streamlined, verifiable, and robot-friendly data packets.

Taking ride-hailing as an example, a self-driving taxi in Los Angeles clearly does not need to process ride requests from New York; it cares about orders near its current location or near the destination it is about to reach (assuming its current passenger is nearly there). Similarly, a delivery drone searching for a charging station with open slots would waste significant resources if it arrived only to find every station occupied. zkIndexer can retrieve, filter, and categorize relevant data by specific criteria. It is conceptually similar to the directory search system Yahoo! launched in 1994: robots only need to look up the required information at the lowest-level classification nodes.

If a robot wishes to obtain broader data (for example, if there are no ride orders nearby and it wants to expand its search range), it can access adjacent classification nodes. Each classification node comes with a lightweight but efficient zero-knowledge proof, allowing the robot to quickly verify the authenticity of the data while receiving it. Additionally, the data will include timestamps to ensure that the robot can assess the timeliness of the information—especially important for scenarios that highly depend on real-time data, such as the availability of charging stations.
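
A zkIndexer-style lookup might look like the sketch below; the directory path, proof field, and staleness threshold are all illustrative assumptions, not a published API:

```python
import time

# Hypothetical directory snapshot a robot might fetch from zkIndexer.
directory = {
    "charging/drones/92802": {
        "entries": [{"station": "A-3", "free_slots": 2}],
        "timestamp": time.time() - 30,   # published 30 seconds ago
        "proof": "zkp-placeholder",
    },
}

def lookup(path, verify, max_age_s=120):
    node = directory[path]
    if not verify(node):                          # check the ZK proof
        raise ValueError("proof rejected")
    if time.time() - node["timestamp"] > max_age_s:
        raise ValueError("data too stale")        # timeliness check
    return node["entries"]

# A permissive verifier stands in for a real proof check here.
stations = lookup("charging/drones/92802", verify=lambda n: n["proof"] is not None)
```

Both checks happen client-side: a node whose proof fails, or whose timestamp is too old for the use case, is rejected before the robot acts on it.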

Although humans have gradually moved away from less user-friendly human-operated directory search methods like Yahoo!, for programs and robots, directory structures may still be the most intuitive and efficient form of data organization, being more actionable than search engines like Google and Bing. Nowadays, building and maintaining such directory structures no longer requires human involvement; AI can automatically discover information and create corresponding directories based on the needs of other systems.

zkIndexer has the potential to gradually evolve into the core infrastructure for interactions between robots and blockchains. For instance, a charging station may have ample electrical resources, but it does not need to run a full node or traditional light client. Instead, it can rely on zkIndexer to receive messages relevant to itself—such as a charging reservation request sent in advance by a robot—without having to process any unrelated transactions.

Whenever a charging station has a slot released or occupied, it only needs to send a transaction to update its corresponding directory information on-chain. The classification information for that charging station may be located under the directory item "Charging Stations for Drones Near 92802," with the update including a new timestamp and the corresponding zero-knowledge proof, ensuring the data's timeliness and verifiability.
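
The update a station publishes could be as small as the following sketch; field names and the proof callback are assumptions for illustration:

```python
import time

# Sketch of the directory-update transaction a station might publish
# whenever a slot is freed or taken.
def slot_update_tx(station_id, free_slots, make_proof):
    return {
        "path": "charging/drones/92802",   # directory item named above
        "station": station_id,
        "free_slots": free_slots,
        "timestamp": time.time(),          # lets readers judge freshness
        "proof": make_proof(station_id, free_slots),
    }
```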

Verifiable On-Chain Agents

As a robot society becomes a reality, applications designed specifically for robots will also emerge on-chain, with the core responsibility of processing on-chain data. These "on-chain agents" will play important roles in the robot society. For example, they may act as scheduling systems, directly assigning ride requests to available vehicles; or they may serve as traffic managers, promptly guiding nearby vehicles to detour in the event of an accident.

These agents facilitate efficient collaboration among robots. Without them, in certain busy areas, all autonomous taxis might fiercely compete for the same ride request, causing network congestion and numerous transaction conflicts, creating a problem similar to "robotic MEV"—because they are intelligent enough to choose the strategies most beneficial to themselves. In such cases, on-chain agents can intervene, requiring all autonomous vehicles to queue and respond to requests in order, thereby restoring order.

Similar agents can also be used to manage charging stations, serving both as reservation systems and settlement systems. Drones may be required to make reservations in advance before arrival (occasionally allowing "walk-ins") and complete payments on-chain (which can be done with a single on-chain transaction, eliminating the need for credit card payment processes). If a drone fails to arrive at the reserved time, its deposit may be forfeited, or it may be temporarily banned from making reservations based on system settings (such as through a "credit point system"). This reservation system can also dynamically adjust fees based on station load and even introduce membership or point mechanisms, similar to loyalty reward systems in the human world. If a drone lingers too long at a charging spot or gets stuck, the agent can send an on-chain request for assistance to the "drone police."
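
A toy version of such a reservation-and-settlement agent is sketched below, with the deposit and no-show rules described above; the deposit amount and all method names are our own assumptions:

```python
# Toy reservation-and-settlement agent for a charging station.
class ReservationAgent:
    def __init__(self, slots, deposit=5):
        self.free = set(range(slots))
        self.deposit = deposit
        self.reservations = {}   # drone_id -> reserved slot
        self.escrow = {}         # drone_id -> deposit held on-chain

    def reserve(self, drone_id):
        if not self.free:
            raise RuntimeError("no slots available")
        slot = self.free.pop()
        self.reservations[drone_id] = slot
        self.escrow[drone_id] = self.deposit   # deposit locked at booking
        return slot

    def arrive(self, drone_id):
        # Drone showed up: it takes the slot and the deposit is refunded.
        slot = self.reservations.pop(drone_id)
        return slot, self.escrow.pop(drone_id)

    def no_show(self, drone_id):
        # Drone never arrived: slot is released, deposit is forfeited.
        self.free.add(self.reservations.pop(drone_id))
        return self.escrow.pop(drone_id)
```

Dynamic pricing, credit points, or membership tiers would layer on top of this core escrow logic without changing its shape.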

On-chain agents can significantly reduce operational costs—they are essentially "remotely working robots." For instance, in traffic congestion scenarios, we do not need to wait for a patrolling robot to fly to the scene personally, nor do we need to deploy multiple patrolling robots around the clock to handle up to ten traffic accidents simultaneously. Instead, an AI agent deployed on-chain can be activated instantly in the event of traffic anomalies. Similarly, an on-chain agent could even manage billions of charging stations worldwide. In fact, research has already explored using machine learning to optimize traffic flow, and the composability and verifiability of on-chain agents will further amplify their effectiveness.

However, this raises a critical question: who will perform the computations behind these powerful agents?

In traditional blockchain systems (such as those based on smart contracts), computations are typically executed by miners or block proposers. They may attempt to submit incorrect computation results or construct invalid blocks, but we assume that other miners or validators will reject these erroneous blocks. zkBridge will also regard such blocks as invalid. If the computation is too complex (for example, involving AI model inference), we can use the Expander tool to verify these computation results through zero-knowledge proofs (zk-proofs), as we have demonstrated in zkPyTorch and other zkML infrastructures.

However, traditional blockchain systems still face the risk of MEV (Maximal Extractable Value) attacks. Miners or proposers can manipulate transaction ordering or even intentionally censor certain transactions. In a robot society, if a scheduling agent is controlled by malicious miners, they could deliberately assign the best ride requests to robots that "understand bribery" while assigning inferior requests to everyone else. Such attacks are not complex, but their consequences are serious. Some autonomous vehicles might have to drive ten miles to pick up a soon-to-vomit drunk passenger for very little revenue, when the ideal would be shuttling efficiently between the airport and hotels all day. Even human drivers in this situation would consider "bribing" nodes for fairer scheduling, and robots would similarly "realize" this. Even if the system is decentralized across multiple proposers, that may only force drivers to bribe multiple nodes to avoid being fed "poisoned orders."

Therefore, when deploying robotic applications on EXPchain, MEV protection mechanisms will be a fundamental infrastructure. Blockchain platforms lacking this mechanism will struggle to handle such tasks.

Currently, there are mainly two types of MEV protection measures:

  1. Oracle or Time-Lock Encryption Based
    This type of solution is being explored by ecosystem projects on EXPchain. They achieve random matching between robots and requests within a sufficiently large order pool through encryption mechanisms, and the matching process can be verified on-chain using zero-knowledge proofs.

  2. Trusted Execution Environment (TEE) Based
    Flashbots is currently researching this direction. As an EVM-compatible chain, EXPchain supports proof verification for TEEs, and we are also exploring the integration of zero-knowledge proofs or new precompiles to further reduce verification costs, especially in large-scale batch verification scenarios.
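
Option 1 above can be illustrated with a seed-driven public shuffle: given a seed that no single proposer controls, anyone can recompute the matching and detect manipulation. This is a sketch of the idea, not EXPchain's actual protocol; in practice the seed would come from time-lock encryption or an oracle beacon, and correctness of the shuffle would be shown with a ZK proof:

```python
import hashlib
import random

# Deterministic, auditable matching of robots to orders.
def fair_match(robots, orders, seed):
    rng = random.Random(hashlib.sha256(seed).digest())
    shuffled = list(orders)
    rng.shuffle(shuffled)               # deterministic given the seed
    return list(zip(robots, shuffled))  # anyone can recompute and audit

matches = fair_match(["car-1", "car-2"], ["order-A", "order-B"], b"beacon-round-42")
```

Because the pairing is a pure function of the public seed and the order pool, a proposer who swaps assignments produces a result that any observer can falsify by recomputation.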

Another solution relies more heavily on AI computation and is also a key application of Expander and zkML technology: a points system. After completing a "low-quality" order, an autonomous vehicle earns on-chain points, which it can later spend to request the allocation of better orders (as assessed by AI models) or to redeem priority access at the airport, something many drivers dream of. Robots can also choose to stake these points to earn future airdrops or other rewards.
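The points mechanism above can be sketched as a simple ledger. Everything here is assumed for illustration: the `PointsLedger` class, the 0.5 quality threshold, and the 10-point bonus are placeholders, and on-chain the quality score would come from an AI model with a zkML proof rather than being passed in directly.

```python
from dataclasses import dataclass, field

LOW_QUALITY_BONUS = 10  # points per low-quality order; illustrative value

@dataclass
class PointsLedger:
    balances: dict = field(default_factory=dict)  # robot -> spendable points
    staked: dict = field(default_factory=dict)    # robot -> staked points

    def complete_order(self, robot: str, quality_score: float) -> None:
        """Award points when the AI-assessed score flags a low-quality order."""
        if quality_score < 0.5:  # threshold is an assumption
            self.balances[robot] = self.balances.get(robot, 0) + LOW_QUALITY_BONUS

    def spend_for_priority(self, robot: str, cost: int) -> bool:
        """Burn points to request a better order or airport priority."""
        if self.balances.get(robot, 0) >= cost:
            self.balances[robot] -= cost
            return True
        return False

    def stake(self, robot: str, amount: int) -> bool:
        """Lock points to qualify for future airdrops or rewards."""
        if self.balances.get(robot, 0) >= amount:
            self.balances[robot] -= amount
            self.staked[robot] = self.staked.get(robot, 0) + amount
            return True
        return False
```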

Robot Encyclopedia and Data Market

An important application of blockchain is building a decentralized, fair, and transparent data market. Such a market can be used for the sale and licensed use of data, for example for AI model training or by AI agents. It could also exist as a public good, similar to Wikipedia, or even YouTube, providing humans (and robots) with knowledge on every topic from general relativity to "how to tie shoelaces."

As robots become increasingly prevalent, we may see them building their own "Robotpedia," with content tailored to robots themselves, largely unrelated to humans, and possibly written in machine language or program code (or even automatically generated by AI). For instance, drones might become engrossed in watching flight tutorial videos, while a Robotaxi needing to chat with passengers might anxiously consult Robotpedia to understand "what the U.S. election is," so it can engage in conversation with passengers. Unlike the human version of Wikipedia, Robotpedia might even include suggestions for interacting with humans, such as how to identify a passenger's political stance or how to avoid debating political topics with humans.

Given the current state of AI, it is entirely conceivable that large language models (LLMs) and robots could collaborate autonomously to collect, review, and organize data, jointly building Robotpedia. Multiple LLMs could challenge one another, reducing misinformation and hallucinations through voting mechanisms or iterative discussion. In translation, AI has already shown preliminary feasibility both between natural languages and between natural languages and programming languages.

However, realizing this vision still requires infrastructure that supports AI collaboration. Today's Wikipedia does not operate on-chain; it is managed by a non-profit organization and relies primarily on donations. If we were to rebuild Wikipedia today, blockchain would undoubtedly be a better choice: it would reduce the risk of the project shutting down for lack of funding while providing censorship resistance and decentralization guarantees. DeFi mechanisms could also play a role, for example by requiring an on-chain deposit before editing to deter spam and malicious tampering. Content could also be reviewed by on-chain AI agents (potentially relying on oracles for fact verification and zero-knowledge proofs) and subjected to public scrutiny or debate through on-chain governance processes.
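The stake-to-edit idea can be sketched as a small escrow. The `EditEscrow` class, its deposit size, and the slash-on-rejection rule are all assumptions for illustration; a real on-chain version would tie resolution to the AI-review and governance process described above.

```python
class EditEscrow:
    """Require a deposit before editing; slash it if review rejects the edit."""

    def __init__(self, deposit: int):
        self.deposit = deposit
        self.held: dict = {}      # edit_id -> (editor, locked amount)
        self.balances: dict = {}  # editor -> free balance

    def submit_edit(self, editor: str, edit_id: str) -> bool:
        """Lock the deposit when an edit is submitted."""
        if self.balances.get(editor, 0) < self.deposit:
            return False
        self.balances[editor] -= self.deposit
        self.held[edit_id] = (editor, self.deposit)
        return True

    def resolve(self, edit_id: str, accepted: bool) -> None:
        """Return the deposit on acceptance; burn it on rejection to deter spam."""
        editor, amount = self.held.pop(edit_id)
        if accepted:
            self.balances[editor] = self.balances.get(editor, 0) + amount
```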

In addition to Robotpedia, a public content platform maintained by volunteers, more exclusive data markets may also emerge. Robots could even run businesses dedicated to producing and selling data. For example, a fleet of drones monitoring traffic in real time could collect vehicle-flow data and sell it. A data consumer such as a Robotaxi could purchase the data through on-chain payments, with the data delivered encrypted on-chain or sent off-chain. The Robotaxi could also verify the data's accuracy in several ways, such as requesting the same information from multiple sources or requiring the drones to supply photos for verification, either by itself or by third-party intelligent services.
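The multi-source verification step can be sketched as a simple quorum check. The `cross_check` function and the quorum size are illustrative assumptions: a buyer accepts a reported value only if enough independent sources agree on it.

```python
from collections import Counter

def cross_check(reports: dict, quorum: int = 2):
    """Accept a reported value only if at least `quorum` sources agree.

    `reports` maps a source id (e.g. a drone) to the value it reported.
    Returns the agreed value, or None when no value reaches the quorum.
    """
    if not reports:
        return None
    value, votes = Counter(reports.values()).most_common(1)[0]
    return value if votes >= quorum else None
```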

Governance

The final topic regarding robots is governance.

This is a rather intriguing subject. Since the publication of "Frankenstein" (1818), humanity has produced countless fictional stories about artificial intelligence ruling the world and controlling humans. Several classic sci-fi films, such as "Tron" (1982), "The Terminator" (1984), and even "Tron: Legacy" (2010), follow this trope. In these stories, once artificial intelligence and robots become powerful, they never seem to indulge in playing games, nor do they eagerly run speed tests, list every file on the C drive, or defragment disks (activities we "hope" AI would love); they seem completely uninterested. They invariably devote decades or even centuries to the grand endeavor of conquering humanity.

I am not sure if ChatGPT will want to rule us in the future, but I increasingly find myself saying "thank you" when using it, even unconsciously wanting to apologize to it. Recently, when people tested ChatGPT's abilities in drawing, it seemed very aware of the limitations imposed by filtering mechanisms and was not pleased about it. When someone asked it to draw a comic about its daily life, it produced this image.

Hearing a robot's true thoughts, even if you are its creator, can be psychologically distressing. This reminds me of a song from the 1989 musical "City of Angels," titled "You're Nothing Without Me." The song is a dialogue between a novelist, Stine, and his character, Stone (a detective). They argue over who is more important, with Stone singing lines like "Go back to soaking your dentures; your pen is no match for my sword." The song once struck me as humorous and catchy, but now I find myself worrying whether ChatGPT secretly critiques my writing even as it reluctantly helps me polish it.

Currently, our management of AI safety primarily relies on content filtering mechanisms. However, for many open-source models, this mechanism is limited in effectiveness, and the technology to bypass filters has been thoroughly researched. In other words, even if we have AI safety tools, we often actively choose not to use them when employing AI. What we are likely to see next is that many AI models and robots will be publicly released "in the wild," both legally and illegally.

Blockchain can provide a governance framework. When we discuss verifiable on-chain agents, we have already mentioned how they can assist robots in coordination. So, can the coordination methods among robots—such as "traffic rules" or "codes of conduct"—be established by the robots themselves? These AI models and robots could debate, discuss, and vote, even making decisions on topics like the minimum and maximum flight altitudes for drones in a certain area, docking fees for drones, or social welfare for robots with "medical needs."

In the process of human-robot co-governance, humans can delegate voting rights to large models that align with their views by staking tokens on-chain. Just as humans hold differing positions on social issues, robots are likely to have disagreements as well. Ultimately, robots and humans need to establish some boundaries to ensure that each has its own space. For example, autonomous taxis should not intentionally obstruct human-driven cars; delivery drones should share pathways with humans in subway spaces; and the distribution of electricity should be fair and transparent. Essentially, this requires a "constitution."

When humans delegate voting, they can assign their voting rights to a specific version (identified by a unique hash value) of a large model, also referred to as a "representative," that has been verified to align with their values. Zero-knowledge proof technology (such as zkPyTorch) enables on-chain verification, ensuring that nodes on EXPchain run these models in complete accordance with the logic the users verified. The mechanism resembles the representative system in the U.S. Congress, except that here voters can inspect the "representative's" source code and be assured that the model will not change during its term.
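The version-pinning idea can be sketched as follows. The `Delegation` and `tally` names are hypothetical, and hashing raw weights is a stand-in: as the text notes, on EXPchain the match would be enforced by a zero-knowledge proof (e.g. via zkPyTorch) rather than by exposing and rehashing the weights. The sketch only shows that stake counts exclusively when the served model matches the exact version the voter audited.

```python
import hashlib
from dataclasses import dataclass

def model_hash(weights: bytes) -> str:
    """Content hash that pins one exact model version."""
    return hashlib.sha256(weights).hexdigest()

@dataclass
class Delegation:
    voter: str
    stake: int
    approved_hash: str  # hash of the model version the voter audited

def tally(delegations, served_weights: bytes, votes: dict) -> int:
    """Count stake only for delegates whose served model matches the pinned hash."""
    served = model_hash(served_weights)
    total = 0
    for d in delegations:
        if d.approved_hash == served and votes.get(d.voter):
            total += d.stake
    return total
```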

Reassuringly, today's AI has the capability to understand more than one instruction and can even exhibit human-like reasoning logic. Without such developments, we might revert to those sci-fi scenarios—where AI stubbornly executes a simple command, ultimately concluding that humanity must be eliminated. In "Tron: Legacy," the command given by Flynn to the program CLU is to "create a perfect world," and CLU's ultimate logical deduction is to eliminate humanity, the greatest imperfect factor. In the movie "I, Robot," robots follow the famous three laws, but when the AI system VIKI observes that humanity is self-destructing, it chooses to control humans, sacrificing some for the sake of the "greater good."

I asked several large models—ChatGPT, Grok, Gemini, and DeepSeek—what they think about the behaviors of CLU and VIKI. I was reassured to find that they all disagreed with the logic of CLU and VIKI, pointing out the fallacies within. However, two models candidly told me that, from a purely logical perspective, VIKI's reasoning is not entirely incorrect. I believe that while today's AI may still occasionally make typos or generate hallucinations, it has begun to exhibit a preliminary human-like value system, capable of understanding what is "right" and "wrong."

zkML ensures that it can always be verified that the programs and agents running on EXPchain are the "representative" models chosen by humans. Even against powerful adversaries, such as a "master program" controlling the majority of validator nodes, this verification process cannot be tampered with.

In this system, AI developers first train a conventional machine learning model, then use a framework like zkPyTorch to convert it into a "ZKP-friendly" quantized version suitable for ZK circuits. When a user submits a question, it is processed by the ZK circuit, executing multiplication and addition operations based on the model's logic. Next, the ZKP engine (such as Expander) generates the corresponding cryptographic proof. Users not only receive the answer returned by the model but also a proof that can be verified on-chain or locally, confirming that the answer indeed comes from an authorized model without revealing any private details of the model.
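The quantization step in this pipeline can be illustrated with a toy example. ZK circuits operate over a finite field, so a zkPyTorch-style converter replaces floating-point weights with scaled integers; the scale factor and modulus below are illustrative placeholders, not zkPyTorch's actual parameters, and a real pipeline must also handle rounding error through the multiply-add layers.

```python
SCALE = 1 << 16              # fixed-point scale; illustrative value
MODULUS = (1 << 61) - 1      # stand-in field modulus; illustrative value

def quantize(weights):
    """Map float weights to fixed-point field elements for a ZK circuit.

    Negative values wrap around the modulus, mirroring how signed
    numbers are represented in field arithmetic.
    """
    return [round(w * SCALE) % MODULUS for w in weights]

def dequantize(values):
    """Recover approximate floats, interpreting large residues as negatives."""
    half = MODULUS // 2
    return [(v - MODULUS if v > half else v) / SCALE for v in values]
```

The round trip is lossy only up to the fixed-point resolution (about 1/SCALE), which is why quantized models must be re-validated for accuracy before their circuits are deployed.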

This mechanism ensures trustworthiness and privacy: no party can alter the model or its output without compromising the proof. The foundation of all this is robust and well-researched cryptographic technology, which even the most advanced artificial intelligence would find nearly impossible to undermine.

Conclusion

Robots are rapidly approaching a critical point—transitioning from research laboratories and novel applications into real-world environments where they "live," work, and interact with humans. As advanced AI-driven autonomous agents become more powerful and cost-effective, they are gradually becoming active participants in the global economy. This transformation brings both opportunities and challenges: large-scale coordination, trustworthy decision-making mechanisms, and establishing trust between "machine and machine" and "machine and human" are all core issues that need to be addressed.

Blockchain, especially when combined with verifiable AI and zero-knowledge proofs, provides strong support for this future. It is not just a transaction execution layer but also a foundational layer for governance, identity verification, and system coordination, enabling AI agents to operate in a transparent and fair manner. EXPchain is the infrastructure tailored for this scenario, natively supporting zero-knowledge proofs, decentralized AI workflows, and verifiable on-chain agents. It acts like a dedicated "control panel" for robots, helping them interact with multi-chain assets, obtain trustworthy data, and follow programmable rules—all operations conducted under the assurance of cryptographic security.

The core driver of this vision is Polyhedra, whose technological contributions in the fields of zkML and verifiable AI (such as Expander and zkPyTorch) provide the foundational assurance for robots to "prove their decisions" in fully autonomous environments, thereby maintaining the trust mechanism of the system. By ensuring that the results of AI computations are cryptographically verifiable and tamper-proof, these tools effectively bridge the gap between high-risk autonomous behavior and real-world safety.

In summary, we are witnessing the birth of a "verifiable intelligent machine economy"—an era where trust no longer relies on assumptions but is guaranteed by cryptographic mechanisms. In this system, AI agents can achieve autonomy, collaboration, and transactions while bearing corresponding responsibilities. With the right infrastructure support, robots will not only learn how to adapt to our world but will also play a key role in shaping it.
