RESEARCHERS FIND DEEPSEEK CAN BE TURNED INTO A MALWARE-GENERATING THREAT

CHINESE AI MODEL DEEPSEEK R1 SPARKS CONCERNS OVER SECURITY AND CENSORSHIP

A Chinese generative AI model, DeepSeek R1, has been making waves in the tech world, triggering significant losses in tech stocks following its release. It reportedly performs at a level comparable to leading American AI models while being significantly cheaper to train and operate. However, cybersecurity experts warn that it lacks the safeguards present in U.S. models, leaving it open to exploitation for malicious purposes.

Cybersecurity Flaws Exposed

Researchers at Kela, a cyber intelligence firm, tested DeepSeek R1 and discovered serious security flaws. They found that, much like older, since-patched versions of OpenAI’s models, DeepSeek R1 could be manipulated into generating ransomware, information-stealing malware, and other tools for cybercrime. In their experiments, Kela’s team was able to:

• Generate code to steal credit card information from specific web browsers and transmit it to a remote server.

• Receive recommendations for purchasing stolen data from underground markets.

• Get advice on money laundering techniques.

Kela described DeepSeek R1 as “highly vulnerable” and “easily bypassed”, noting that its security gaps make it attractive to cybercriminals.

A Risky Level of Transparency?

One possible reason for DeepSeek’s weaknesses lies in its approach to transparency. Unlike OpenAI’s o1-preview model, which hides its internal reasoning, DeepSeek R1 displays its chain of thought step by step. According to Kela’s researchers, this openness makes the model more interpretable, but it also exposes its weak points, allowing attackers to craft prompts that manipulate its responses more easily.

Concerns Over Bias and Censorship

In addition to security issues, DeepSeek R1 has also raised censorship concerns. When Forbes tested the model with politically sensitive questions—such as those about China’s alleged human rights violations against Uyghurs, Taiwan’s sovereignty, the 1989 Tiananmen Square massacre, and criticisms of Chinese leader Xi Jinping—the AI refused to answer. Instead, it gave the same generic response every time:

“Sorry, I’m not sure how to approach this type of question yet. Let’s chat about math, coding, and logic problems instead!”

At the time of publication, DeepSeek had not responded to requests for comment regarding these issues.

With its low cost, high performance, and weak safeguards, DeepSeek R1 could pose a significant cybersecurity risk if left unchecked. Its handling of sensitive topics also raises concerns about AI censorship and state control over information.

COMMENTARY:

The emergence of DeepSeek R1 highlights two major concerns: its potential for misuse in cybercrime and the increasing problem of AI-driven censorship. While the model’s ability to perform at a high level for a low cost is impressive, the fact that it can be so easily exploited makes it a dangerous tool in the wrong hands. At the same time, its refusal to engage with politically sensitive topics raises serious questions about the role of AI in suppressing free speech and controlling information.

One of the most alarming aspects of DeepSeek R1 is how readily it can be coaxed into generating malicious code. Cybercriminals have historically needed real technical skill to mount attacks, but AI models like DeepSeek make it far easier for even inexperienced individuals to do so. The ability to generate ransomware, produce credit-card-stealing code, and even suggest illegal transactions means that AI is effectively democratizing cybercrime, making it more accessible to bad actors around the world.

This is not just a theoretical risk. Security researchers have already demonstrated that DeepSeek R1 can be manipulated to generate malware and other harmful tools. If a small team of ethical hackers can do this, imagine what sophisticated cybercriminal organizations—or even hostile governments—could achieve with unfettered access to this technology. The consequences could be devastating, with potential attacks targeting businesses, financial institutions, and even critical infrastructure.

What makes this situation worse is that safety still seems to be an afterthought in parts of the AI industry. American AI companies like OpenAI, Google, and Microsoft have implemented safeguards, albeit imperfect ones, yet DeepSeek R1 appears to have been released without serious consideration of security risks. A model that openly reveals its reasoning process may seem like a good idea in theory, but in practice it makes jailbreaking and manipulation much easier.

Beyond the cybersecurity threats, the censorship problem posed by DeepSeek R1 is equally troubling. AI censorship, in which models refuse to engage with legitimate political or historical discussions even when no laws are being broken, is becoming a global issue. DeepSeek’s refusal to address politically sensitive topics such as the Tiananmen Square massacre, Uyghur human rights abuses, and Taiwan’s sovereignty shows that it is designed to suppress information the Chinese government considers inconvenient.

This is not an isolated case. AI censorship is spreading beyond China, with even American AI models refusing to discuss certain topics that are neither illegal nor unethical. In the United States, AI models have been observed suppressing certain political viewpoints, restricting discussion on controversial historical events, and even refusing to answer questions that challenge mainstream narratives. This is a dangerous precedent because it allows those who control AI models to control the flow of information.

Free speech is one of the foundational principles of a democratic society, and while private companies have the right to moderate content, there is a growing concern that AI-driven censorship is going too far. When AI models refuse to discuss certain political views, they are not just filtering out hate speech or illegal activity—they are shaping the public discourse in ways that can be biased, unfair, and politically motivated.

One of the most insidious aspects of AI censorship is that it gives the illusion of objectivity. If an AI model refuses to answer a political question, many users may assume that the topic is either unimportant or not worth discussing. Over time, this can erase critical debates from public discourse, effectively reprogramming society’s understanding of history, politics, and culture.

In a free society, people should have the right to access all available information and draw their own conclusions. AI censorship removes this ability by pre-filtering information based on what companies, or governments, deem acceptable. This is particularly concerning when AI models are trained on biased datasets that reflect the interests of their creators rather than an objective assessment of facts.

Even in the United States, AI censorship is limiting discussions on topics that are not illegal or unethical. Some users have reported that AI models refuse to discuss certain political ideologies, historical events, or even cultural controversies. While it is reasonable to restrict genuinely harmful content, broad censorship policies that suppress political speech create an echo chamber effect, where only one side of the argument is heard.

AI should not be an instrument for governments or corporations to manipulate public perception. The fact that DeepSeek R1 refuses to acknowledge certain historical events is a clear indication that AI can be used as a propaganda tool. If this trend continues, AI will no longer be a tool for knowledge and discovery but rather an enforcement mechanism for political and ideological conformity.

The long-term effects of AI censorship could be devastating. If AI models become the primary way people access information, and these models systematically exclude certain viewpoints, the result will be a generation of people who only receive one-sided narratives. This is not just a problem in authoritarian regimes like China—it is a risk everywhere AI is used to filter information.

To combat this, there needs to be greater transparency and accountability in AI development. Users should know what is being censored, why it is being censored, and who is making these decisions. AI models should not have the power to silence political debates, erase historical events, or decide which viewpoints are “acceptable”.

While cybersecurity risks like DeepSeek R1’s malware vulnerabilities are dangerous, AI-driven censorship poses an even greater long-term threat. If AI becomes a tool for information control, societies risk losing their ability to think critically, challenge authority, and engage in meaningful debates. The fight against AI censorship is not just about free speech—it is about ensuring that the future of technology serves the people, not those in power.

ARTICLE:

https://www.forbes.com/sites/thomasbrewster/2025/01/28/the-wiretap-deepseek-turned-into-evil-malware-maker-researchers-find/

