DeepSeek: A Wake-Up Call for Responsible Innovation and Risk Management

CN
AiCoin
Follow
5 hours ago

Source: Cointelegraph
Original: “DeepSeek: A Wake-Up Call for Responsible Innovation and Risk Management”

Author: Dr. Merav Ozair

Since its release on January 20, DeepSeek R1 has attracted widespread attention from users as well as global tech tycoons, governments, and policymakers—ranging from praise to skepticism, from adoption to bans, from the brilliance of innovation to immeasurable privacy and security vulnerabilities.

Who is right? The short answer: everyone is right, and everyone is wrong.

This is not a "Sputnik moment."

DeepSeek has developed a large language model (LLM) that can perform comparably to OpenAI's GTPo1, with the time and cost required being only a small fraction of what OpenAI (and other tech companies) would spend to develop their own LLM.

Through clever architectural optimizations, the costs of model training and inference have been significantly reduced, allowing DeepSeek to develop an LLM in 60 days at a cost of less than $6 million.

Indeed, DeepSeek deserves recognition for actively seeking better ways to optimize model structures and code. This is a wake-up call, but it is far from being a "Sputnik moment."

Every developer knows there are two ways to improve performance: optimize the code or "throw" a large amount of computational resources at it. The latter is extremely costly, so developers are always advised to maximize architectural optimization before increasing computational resources.

However, with the high valuations of AI startups and massive investments pouring in, developers seem to have become complacent. If you have a budget of billions of dollars, why spend time optimizing model structures?

This serves as a warning to all developers: return to the basics, innovate responsibly, step out of your comfort zone, break free from conventional thinking, and do not fear challenging the norm. There is no need to waste money and resources—use them wisely.

Like other LLMs, DeepSeek R1 still has significant shortcomings in inference, complex planning capabilities, understanding of the physical world, and long-term memory. Therefore, there is nothing revolutionary here.

It is now time for scientists to go beyond LLMs, address these limitations, and develop "next-generation AI architectural paradigms." This may not be LLMs or generative AI—but a true revolution.

Paving the Way for Innovation

DeepSeek's approach may encourage developers worldwide, especially in developing countries, to innovate and develop their own AI applications, regardless of resources. The more people involved in AI research and development, the faster the pace of innovation and the more likely meaningful breakthroughs will occur.

This aligns with Nvidia's vision: to make AI affordable and enable every developer or scientist to create their own AI applications. This is precisely the significance of the DIGITS project announced in early January this year—a desktop GPU priced at $3,000.

Humanity needs "everyone on board" to address urgent issues. Resources may no longer be a barrier—it is time to break old paradigms.

Meanwhile, the release of DeepSeek is also a wake-up call for actionable risk management and responsible AI.

Read the Terms Carefully

All applications have terms of service, which the public often overlooks.

Some alarming details in DeepSeek's terms of service may affect your privacy, security, and even business strategy:

Data Retention: Deleting your account does not mean your data is deleted—DeepSeek still retains your data.

Monitoring: The application has the right to monitor, process, and collect user inputs and outputs, including sensitive information.

Legal Exposure: DeepSeek is governed by Chinese law, which means that state agencies can access and monitor your data upon request—the Chinese government is actively monitoring your data.

Unilateral Changes: DeepSeek can update the terms at any time—without your consent.

Disputes and Litigation: All claims and legal matters are governed by the laws of the People's Republic of China.

These actions clearly violate the General Data Protection Regulation (GDPR) and other GDPR privacy and security violations listed in complaints filed by Belgium, Ireland, and Italy, which have temporarily banned the use of DeepSeek.

In March 2023, Italian regulators temporarily banned OpenAI's ChatGPT due to GDPR violations, which was only restored a month later after compliance improvements. Will DeepSeek follow a similar compliance path?

Bias and Censorship

Like other LLMs, DeepSeek R1 exhibits hallucinations, biases in training data, and behaviors that align with Chinese political stances on certain topics, such as censorship and privacy.

As a Chinese company, this is to be expected. The "Generative AI Law," applicable to AI system providers and users, stipulates in Article 4: this is a censorship rule. This means that those who develop and/or use generative AI must support "socialist core values" and comply with relevant Chinese laws.

This is not to say that other LLMs do not have their own biases and "agendas." This highlights the importance of trustworthy and responsible AI, as well as the necessity for users to adhere to strict AI risk management.

Security Vulnerabilities of LLMs

LLMs may be susceptible to adversarial attacks and security vulnerabilities. These vulnerabilities are particularly concerning as they will affect any organization or individual building applications based on that LLM.

Qualys has conducted vulnerability testing, ethical risk, and legal risk assessments on DeepSeek-R1's LLaMA 8B slimmed-down version. The model failed half of the jailbreak tests—attacks that bypass the built-in security measures and ethical guidelines of the AI model.

Goldman Sachs is considering using DeepSeek but requires a security review, such as injection attacks and jailbreak testing. Regardless of whether the model originates from China, there are security risks for any business before using AI model-driven applications.

Goldman Sachs is implementing the right risk management measures, and other organizations should follow suit before deciding to use DeepSeek.

Learning from Experience

We must remain vigilant and diligent, implementing adequate risk management before using any AI system or application. To mitigate any "agenda" biases and censorship issues posed by LLMs, we might consider adopting decentralized AI, preferably in the form of decentralized autonomous organizations (DAOs). AI knows no borders, and perhaps now is the time to consider establishing unified global AI regulations.

Author: Dr. Merav Ozair

Related: How Decentralized Finance (DeFi) Achieves Secure Scalable Development in the Age of Artificial Intelligence (AI)

This article is for general informational purposes only and should not be construed as legal or investment advice. The views, thoughts, and opinions expressed in this article are solely those of the author and do not necessarily reflect or represent the views and positions of Cointelegraph.

免责声明:本文章仅代表作者个人观点,不代表本平台的立场和观点。本文章仅供信息分享,不构成对任何人的任何投资建议。用户与作者之间的任何争议,与本平台无关。如网页中刊载的文章或图片涉及侵权,请提供相关的权利证明和身份证明发送邮件到support@aicoin.com,本平台相关工作人员将会进行核查。

OKX:注册返20%
链接:https://www.okx.com/zh-hans/join/aicoin20
Ad
Share To
APP

X

Telegram

Facebook

Reddit

CopyLink