DeAI Potential Stock OORT: Breaking the Bottleneck of AI Development and Inspiring Everyone to Contribute Data Enthusiastically


Author: ChainCatcher

The AI sector has entered an explosive era. According to the research report "2024 AI Investment Report" by consulting firm Dealroom, global AI investment is expected to reach $65 billion, accounting for one-fifth of all venture capital. Goldman Sachs' research department also stated that global AI investment could approach $200 billion by 2025.

Thanks to the AI boom, funds are flocking to AI targets. For example, the A-share company Cambricon has surged over 560% since its low in February this year, with a market capitalization exceeding 250 billion RMB; the US stock company Broadcom has surpassed a market value of $1 trillion, becoming the eighth largest publicly traded company in the US.

The combination of AI and Crypto is also showing a hot trend. During the artificial intelligence conference hosted by Nvidia, Bittensor (TAO) led with a market value of over $4.5 billion, while assets like Render (RNDR) and Fetch.ai (FET) have seen rapid value growth.

Following large language models, AI Agents have become the engine of this round of the AI market. For instance, the GOAT token surged more than 100x within 24 hours, and ACT rose nearly 20x in a single day, igniting the Crypto world's enthusiasm for AI Agents.

However, behind the rapid development of AI, there are also concerns. According to an article by Dr. Max Li, founder and CEO of OORT, published in Forbes titled "AI Failures Will Surge in 2025: A Call for Decentralized Innovation," the AI industry faces numerous issues such as data privacy, ethical compliance, and trust crises caused by centralization, which increase the risk of AI failures. Therefore, decentralized innovation has become an urgent priority.

Currently, OORT has established one of the world's largest decentralized cloud infrastructures, with network nodes covering over 100 countries, achieving millions of dollars in revenue, and launching the open-source Layer 1 Olympus protocol (its consensus algorithm, Proof of Honesty (PoH), is protected by US patents). Through the native token OORT, it encourages everyone to contribute data, creating an incentive loop. Recently, OORT launched OORT DataHub, marking a further step towards global, diverse, and transparent data collection and laying a solid foundation for the explosion of DeAI.

OORT: Born by Chance in the Classroom

To understand the OORT project, one must first understand the problems OORT aims to solve. This involves discussing the current bottlenecks in AI development, primarily data and centralization issues:

1. The Disadvantages of Centralized AI

1.1 Lack of Transparency Leading to Trust Crisis. The decision-making process of centralized AI models is often opaque and viewed as a "black box." Users find it difficult to understand how AI systems make decisions, which can lead to severe consequences in critical applications such as medical diagnosis and financial risk control.

1.2 Data Monopoly and Unequal Competition. A few large tech companies control vast amounts of data resources, creating a data monopoly. This makes it difficult for new entrants to obtain sufficient data to train their AI models, hindering innovation and market competition. Additionally, data monopolies may lead to the misuse of user data, further exacerbating data privacy issues.

1.3 Ethical and Moral Risks Are Hard to Control. The development of centralized AI has raised a series of ethical and moral issues, such as algorithmic discrimination and bias amplification. Furthermore, the application of AI technology in military and surveillance fields has raised concerns about human rights, security, and social stability.

2. Data Bottleneck

2.1 Data Scarcity. In the booming development of artificial intelligence, the issue of data scarcity has gradually emerged as a key factor limiting further progress. The demand for data from AI researchers has exploded, yet the supply of data has struggled to keep up. Over the past decade, the continuous expansion of neural networks has relied on vast amounts of data for training, as seen in the development of large language models like ChatGPT. However, traditional datasets are nearing exhaustion, and data owners are beginning to restrict content usage, making data acquisition increasingly difficult.

The causes of data scarcity are multifaceted. On one hand, data quality varies widely, with issues of incompleteness, inconsistency, noise, and bias severely affecting model accuracy. On the other hand, scalability challenges are significant; collecting sufficient data is costly and time-consuming, maintaining real-time data is difficult, and manual annotation of large datasets presents a bottleneck. Additionally, access and privacy restrictions cannot be overlooked, as data silos, regulatory constraints, and ethical issues make data collection arduous.

Data scarcity has profound implications for AI development. It limits model training and optimization, potentially forcing AI models to shift from pursuing scale to becoming more specialized and efficient. In industry applications, achieving precise predictions and decisions becomes challenging, hindering AI's greater role in fields like healthcare and finance.

To address data scarcity, researchers and companies are actively exploring various avenues. For instance, attempts to collect non-public data face issues of legality and quality; focusing on specialized datasets, though promising, requires validation of their availability and practicality; generating synthetic data holds potential but also has numerous drawbacks. Furthermore, optimizing traditional data collection methods and exploring decentralized data collection solutions have become important directions for solving data scarcity. In summary, the issue of data scarcity urgently needs resolution to promote the continuous and healthy development of AI.

2.2 Problems Caused by Centralized AI's "Data Black Box," Such as Privacy Issues, Lack of Diversity, and Opacity.

In the current model, the data collection and processing processes lack transparency, leaving users often unaware of the fate and usage of their personal data. Many machine learning algorithms require vast amounts of sensitive user information for training, which poses risks of data leakage. If privacy protection measures are inadequate, users' private information may be misused, leading to a trust crisis.

The lack of diversity is another significant drawback. Currently, the data relied upon by centralized AI is often concentrated in a few fields or regions, with most mainstream international datasets primarily in English, resulting in a singular data source. This makes AI models trained on such data perform poorly in diverse real-world scenarios, easily leading to bias. For example, when handling multilingual tasks or data from different cultural backgrounds, models may struggle to understand and respond accurately, limiting the broad applicability and fairness of AI technology.

Opacity permeates the entire data processing workflow. From the source of data collection to processing methods and how it ultimately translates into decisions, these stages are like a black box to outsiders. This lack of transparency not only makes it difficult for users to assess data quality but also obscures whether models are biased due to data, thereby affecting the fairness and accuracy of decisions. In the long run, this is detrimental to the healthy development of AI technology and its widespread acceptance in society.

2.3 Challenges in Data Collection Have Become Key Factors Hindering AI Development. According to Dr. Max Li's column in Forbes, common issues often arise from the following aspects:

(1) Data Quality Issues.

Incompleteness: Missing values or incomplete data can impair the accuracy of AI models.

Inconsistency: Data collected from multiple sources often have mismatched formats or conflicting entries.

Noise: Irrelevant or erroneous data can dilute meaningful insights and confuse models.

Bias: Data that does not represent the target population can lead to biased models, raising ethical and practical concerns.

(2) Scalability Issues.

Quantity Challenges: Collecting enough data to train complex models can be costly and time-consuming.

Real-Time Data Requirements: Applications like autonomous driving or predictive analytics require a continuous and reliable data stream, which can be challenging to maintain.

Manual Annotation: Large datasets often require human labeling, creating severe bottlenecks in time and manpower.

(3) Access and Privacy Issues.

Data Silos: Organizations may store data in isolated systems, limiting access and integration.

Compliance: Regulations like GDPR and CCPA restrict data collection practices, especially in sensitive areas like healthcare and finance.

Ethical Issues: Collecting data without user consent or in a non-transparent manner can lead to reputational and legal risks.

Other common bottlenecks in data collection include a lack of diverse and truly global datasets, high costs associated with data infrastructure and maintenance, challenges in processing real-time and dynamic data, and issues related to data ownership and licensing.
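The quality issues listed above (incompleteness, inconsistency, noise, bias) are typically screened with automated checks before any data reaches model training. The sketch below illustrates the idea with plain Python; the record schema and thresholds are illustrative assumptions, not OORT's actual pipeline.

```python
from collections import Counter

# Hypothetical raw records as collected from multiple sources;
# field names are illustrative, not OORT's actual schema.
records = [
    {"id": 1, "label": "cat", "confidence": 0.96},
    {"id": 2, "label": None,  "confidence": 0.81},   # incomplete: missing label
    {"id": 3, "label": "dog", "confidence": 1.70},   # noise: score out of range
    {"id": 1, "label": "cat", "confidence": 0.96},   # inconsistency: duplicate id
]

def audit(rows):
    """Return a summary count of basic data-quality problems."""
    issues = {"incomplete": 0, "noisy": 0, "duplicate": 0}
    seen = Counter(r["id"] for r in rows)
    for r in rows:
        if any(v is None for v in r.values()):
            issues["incomplete"] += 1
        if not 0.0 <= r["confidence"] <= 1.0:
            issues["noisy"] += 1
    issues["duplicate"] = sum(n - 1 for n in seen.values() if n > 1)
    return issues

print(audit(records))  # {'incomplete': 1, 'noisy': 1, 'duplicate': 1}
```

Checks like these catch obvious defects cheaply; representativeness (bias) is harder and usually requires comparing the dataset's distribution against the target population.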

OORT emerged from practical needs, and its establishment was somewhat serendipitous. In 2018, Max was teaching graduate students at Columbia University on a project that required training an AI agent. The high cost of traditional cloud services left the students in a predicament. To solve it, Max conceived a decentralized AI platform, "OORT": the team first explored using blockchain as an incentive layer to connect underutilized nodes worldwide, built a preliminary prototype of a decentralized cloud solution, and experimented with PayPal for payments and credit allocation, laying the groundwork for OORT's native token.

Today, OORT has become a leader in DeAI, combining blockchain verification with a global network of data centers and edge devices to design state-of-the-art AI infrastructure.

In response to the scarcity of AI training data, OORT connects underutilized nodes globally through blockchain to facilitate data collection. To incentivize participation and address the challenges of cross-border micropayments, OORT has adopted cryptocurrency payments, thereby establishing a unique business model. Its OORT DataHub product, launched on December 11, primarily addresses data collection and annotation bottlenecks, with a customer base that includes SMEs and some leading global tech companies. The product's decentralized nature enables global, diverse, and transparent data collection: contributors worldwide can easily earn rewards in cryptocurrency, while blockchain keeps data provenance and usage records on-chain, addressing many pain points faced by Web2 cloud services and AI companies. As of the time of writing, OORT DataHub has recorded data uploads from over 80,000 contributors worldwide.

Strong Research and Academic Background, Funded by Giants, Serving Over 10,000 Enterprises and Individuals

The OORT team is formidable. Max is not only the founder and CEO of OORT but also a faculty member at Columbia University, co-founder of Nakamoto & Turing Labs in New York, founding partner of Aveslair Fund in New York, and holds significant influence in the tech field with over 200 international and US patents (both granted and pending). He has published numerous papers in well-known academic journals covering areas such as communications, machine learning, and control systems. Additionally, he serves as a reviewer and technical program committee member for leading journals and conferences in various fields, as well as a funding reviewer for the Natural Sciences and Engineering Research Council of Canada.

Before founding OORT, Max collaborated with Qualcomm's research team on 4G LTE and 5G system design. Max is also a co-founder of Nakamoto & Turing Labs, a New York City-based lab focused on blockchain and AI investment, education, and consulting.

Max is also a regular contributor to Forbes magazine. In his latest articles, "AI Failures Will Surge in 2025: A Call for Decentralized Innovation" and "Focusing on Decentralized AI in 2025: The Fusion of Artificial Intelligence and Cryptocurrency," Max emphasizes the development and importance of decentralized AI in the cryptocurrency field, highlighting the transformation and potential it brings. It is evident that Max is a strong advocate for decentralized AI.

Michael Robinson, the chairman of the OORT Foundation, is also a managing board member of Agentebtc, a managing board member of Burble, a managing partner of Aveslair Fund, co-founder and chairman of the Reed-Robinson Fund, and a partner at Laireast. He has extensive cross-disciplinary experience and is dedicated to promoting the integration of global business and technology.

Other core team members come from world-renowned institutions and organizations, such as Columbia University, Qualcomm, AT&T, and JPMorgan Chase. Additionally, OORT's development has received support from well-known crypto venture capital firms like Emurgo Ventures (part of the Cardano ecosystem) and backing from Microsoft and Google.

As of now, OORT has raised $10 million from prominent investors, including Taisu Ventures, Red Beard Ventures, and Sanctor Capital, and has received funding from Microsoft and Google. It has also established partnerships with numerous industry giants, such as Lenovo Imaging, Dell, Tencent Cloud, and BNB Chain.

OORT completed early project explorations from 2018 to 2019 and focused on research and development from 2020 to 2021, building a series of core technologies for data storage, computing, and management and beginning to construct the infrastructure of the OORT ecosystem. During this period, OORT launched decentralized storage nodes and edge devices, forming a preliminary product prototype and laying the technical foundation for subsequent commercialization.

Since 2022, OORT has begun exploring commercialization pathways:

  1. OORT has built a data marketplace platform that connects data providers and data users. Data providers can sell their data on the platform, while data users can purchase the data they need for AI model training and other purposes. OORT generates revenue by charging transaction fees and has established a reward mechanism to encourage data providers to offer high-quality data, providing rewards based on factors such as data quality, diversity, and usage frequency.

  2. OORT offers decentralized cloud storage and computing services, allowing businesses and individuals to rent OORT's cloud resources to run their AI applications. Compared to traditional cloud services, OORT's decentralized cloud services offer higher security, lower costs, and better scalability. Users can flexibly choose the cloud resources they need based on their actual requirements and pay according to usage.

  3. For the specific needs of large enterprises, OORT provides customized AI solutions. These solutions are based on OORT's decentralized technology architecture, offering one-stop services for data management, model training, and intelligent decision-making. By collaborating with enterprises, OORT not only secures a stable source of income but also accumulates industry experience to further optimize its products and services.
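The marketplace reward mechanism described in point 1 (rewards scaled by data quality, diversity, and usage frequency) can be illustrated with a toy scoring function. The weights, field names, and base payout below are assumptions for illustration only, not OORT's published formula.

```python
def reward(quality: float, diversity: float, usage_count: int,
           base_payout: float = 10.0) -> float:
    """Toy contributor reward: scale a base payout by quality and
    diversity scores (each in [0, 1]) and by how often the data is
    actually used. All weights are illustrative assumptions."""
    usage_factor = 1.0 + 0.1 * usage_count  # more downstream usage, more reward
    return round(base_payout * (0.6 * quality + 0.4 * diversity) * usage_factor, 2)

# A high-quality, frequently used contribution earns more:
print(reward(quality=0.9, diversity=0.8, usage_count=5))   # 12.9
print(reward(quality=0.4, diversity=0.2, usage_count=0))   # 3.2
```

Tying part of the payout to realized usage, rather than upload volume alone, is what discourages contributors from flooding the marketplace with low-value data.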

Currently, OORT serves over 10,000 enterprise and individual clients worldwide, with its network nodes generating millions of dollars in revenue, proving the feasibility of its business model.

Everyone Can Participate in the Development of AI and Benefit from It

OORT has several products, including OORT Storage, OORT Compute, and OORT DataHub. Based on the application layers of these three products, OORT also has a solution called OORT AI, which helps enterprises quickly integrate intelligent assistants. The functionalities of the three main products are as follows:

  • OORT Storage is currently the only decentralized solution that can match the performance of AWS S3 storage services, with numerous registered enterprise and individual clients.

  • OORT Compute aims to achieve decentralized data analysis and processing, providing better cost-effectiveness for AI model training and inference. It is still in preparation and has not yet launched.

  • Launched on December 11, OORT DataHub marks a new development phase for the project and is expected to become OORT's new focus and a potential "cash cow."

OORT DataHub provides an innovative way to collect and label data, allowing global contributors to gather, classify, and preprocess data for AI applications. By leveraging blockchain technology, it addresses the issues of single data sources and low labeling efficiency found in traditional data collection methods while enhancing security. Notably, OORT DataHub has successfully launched on the Shenzhen Data Exchange, opening a new avenue for AI companies and research institutions to obtain high-quality, diverse, and compliant datasets.

OORT DataHub offers users various ways to earn points, such as daily logins, completing tasks, verifying tasks, and referral programs. Users can accumulate points to qualify for monthly lotteries and receive incentives in USDT, a dollar-pegged stablecoin.

This product effectively eliminates intermediaries in data collection, providing a safer, participant-controlled process, aligning with the growing call for more ethical approaches to AI.

Based on OORT DataHub, OORT has also launched the OORT DataHub Mini App, which will seamlessly integrate with Telegram's mini-app platform, enabling users to contribute data more easily and participate in decentralized data collection, further expanding the OORT ecosystem and increasing user engagement. This integration is expected to bring millions of users and drive the platform's development.

OORT DataHub embodies OORT's vision, which is to enable everyone to participate in the development of AI and benefit from it, regardless of their geographical location, economic status, or technical background. OORT's mission is to provide reliable, secure, and efficient decentralized AI solutions, promoting the widespread adoption and application of AI technology globally while ensuring data privacy, security, and ethical compliance.

Through a decentralized data marketplace model, OORT breaks the data monopoly, allowing data providers from around the world to upload their data to the platform for trading and sharing. Whether individual users or enterprise users, anyone with valuable data can earn corresponding benefits on the OORT platform, achieving fair distribution of data value.

The decentralized architecture ensures that data is no longer stored in a single server or data center but is distributed across nodes worldwide. Each node encrypts the data, and only authorized users can access and use it. Additionally, the immutable nature of blockchain technology ensures the integrity and authenticity of the data, effectively preventing data leakage and tampering risks.
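The integrity guarantee described above can be sketched with content hashing: a node stores the (encrypted) data blob, while its fingerprint is recorded on-chain, so any later tampering is detectable by re-hashing. This is a minimal stdlib sketch of the general technique, not OORT's actual protocol; the sample payload is hypothetical.

```python
import hashlib

def digest(blob: bytes) -> str:
    """SHA-256 fingerprint of a data blob, as would be recorded on-chain."""
    return hashlib.sha256(blob).hexdigest()

original = b"sensor reading: 42.7C"      # hypothetical contributed data
onchain_record = digest(original)        # immutable once written to the chain

# Later, a verifier re-hashes what the storage node returns and compares:
assert digest(b"sensor reading: 42.7C") == onchain_record   # intact
assert digest(b"sensor reading: 99.9C") != onchain_record   # tampering detected
print("integrity check passed")
```

Because the digest lives on an append-only ledger, a compromised node cannot silently rewrite data and the matching record at the same time.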

Since OORT's decentralized network consists of numerous nodes, there is no single point of failure. Even if a node is attacked or malfunctions, other nodes can continue to operate normally, ensuring the stability and reliability of the entire system. Furthermore, the decentralized consensus mechanism makes it difficult for attackers to alter system data or control the entire network, enhancing system security. For example, in the face of distributed denial-of-service (DDoS) attacks, OORT's distributed architecture can disperse attack traffic, allowing the system to maintain normal operations and ensuring that users' data and services are unaffected.

On the other hand, OORT addresses data collection, control, and management issues by providing innovative data collection and labeling methods, establishing strict data quality control and verification mechanisms, and employing advanced AI algorithms for intelligent data management and analysis.

OORT places a high priority on data protection and privacy compliance, strictly adhering to data protection regulations worldwide, such as GDPR and HIPAA, ensuring that user data is processed legally.

By reviewing OORT's existing product line and product progress, combined with OORT's vision for the future, we can see that OORT has built a fair, transparent, and trustworthy AI ecosystem.

Disclaimer: This article represents only the personal views of its author and does not reflect the position or views of this platform. It is provided for information sharing only and does not constitute investment advice of any kind. Any dispute between users and the author is unrelated to this platform. If any article or image on this page infringes rights, please send the relevant proof of rights and identity to support@aicoin.com, and platform staff will investigate.
