
Hackers Exploiting DeepSeek & Qwen AI Models to Develop Malware

Cybersecurity researchers from Check Point Research (CPR) have uncovered a concerning trend in the cyber threat landscape: hackers are rapidly adopting newly launched AI models, DeepSeek and Qwen, to develop malicious content.

These platforms are being used to bypass restrictions and create malware, marking a shift away from previous reliance on ChatGPT.

The exploitation of these advanced AI tools highlights the evolving sophistication of cybercriminal activities and underscores the urgent need for stronger security measures.

While OpenAI’s ChatGPT has implemented robust anti-abuse mechanisms over the years, DeepSeek and Qwen appear to lack comparable safeguards, making them attractive to threat actors.

Cybercriminals are leveraging these models to create infostealers, bypass banking anti-fraud protections, and optimize spam distribution scripts.

Alarmingly, detailed guides on jailbreaking these AI systems, methods for overriding their built-in restrictions, are being openly shared online.

Jailbreaking Prompts

Techniques such as the “Do Anything Now” approach and other manipulation tactics are enabling hackers to exploit these tools with minimal technical expertise.

Real-World Examples of Malicious Use

One of the most troubling aspects of this development is the ease with which attackers can use DeepSeek and Qwen for harmful purposes.

For instance, infostealers, malware designed to extract sensitive information, are being generated using Qwen’s capabilities.

These tools are then shared across underground forums, lowering the barrier for less skilled attackers to engage in cybercrime.

Threat actors have also been observed using DeepSeek to bypass anti-fraud protections in banking systems, potentially enabling large-scale theft.

Discussions among cybercriminals reveal strategies for exploiting vulnerabilities in financial systems through AI-driven methods.

Additionally, mass spam distribution has become more efficient as attackers use a combination of ChatGPT, Qwen, and DeepSeek to refine their scripts and troubleshoot issues in real time.

The rise of jailbreaking techniques further exacerbates the problem. By manipulating the AI models’ responses, hackers can generate uncensored or unrestricted content that facilitates malicious activities.

This includes creating phishing campaigns, malware code, and other harmful outputs that would otherwise be blocked by ethical safeguards in properly secured AI systems.

The Growing Risk of Unregulated AI Models

The increasing misuse of generative AI tools like DeepSeek and Qwen signals a dangerous shift in the cybersecurity landscape.

As these platforms gain popularity, uncensored versions are expected to proliferate across online repositories, amplifying the risks for organizations and individuals alike.

The lack of robust anti-abuse mechanisms in these newer models makes them particularly vulnerable to exploitation by both experienced hackers and low-skilled attackers who rely on pre-existing tools and scripts.

Check Point Research emphasizes the critical need for proactive defenses against this emerging threat.

Organizations must prioritize security measures as they adopt generative AI technologies or risk exposing themselves to significant vulnerabilities.

With cybercriminals continuing to innovate through advanced AI tools, vigilance and investment in security infrastructure are more important than ever.
