AI-Generated Deepfake Videos Fuel the Next Wave of ‘Scam-Yourself’ Attacks

The cybersecurity landscape is witnessing a sharp rise in AI-driven “Scam-Yourself” attacks, where victims are manipulated into compromising their own systems.

Recent investigations have uncovered a campaign utilizing AI-generated videos, deepfake technology, and malicious scripts to deceive users into executing harmful commands.

These attacks, which surged by 614% in 2024, represent a dangerous evolution in cybercrime tactics.

AI and Deepfake Technology Drive Sophisticated Cybercrime

The campaign begins with AI-crafted videos hosted on compromised or acquired verified YouTube accounts.

These channels, appearing legitimate due to prior authentic content, now feature deceptive tutorials.

One such video claims to unlock TradingView’s developer mode, promising AI-powered tools for financial growth.

These videos are designed to appeal to individuals seeking quick profits, making them more likely to follow the malicious instructions provided.

Deepfake Personas and AI Scripts Amplify Threats

At the heart of this operation is the use of deepfake technology to create convincing personas.

These AI-generated figures exhibit realistic facial expressions, synthesized voices, and natural body movements, yet correspond to no identifiable real-world person.

The attackers have also created multiple YouTube profiles using variations of these personas to amplify their reach.

Many of these accounts were either bought or hijacked; some date back over a decade and boast hundreds of thousands of subscribers.

The malicious scripts embedded in these campaigns are crafted using tools like ChatGPT.

While the core script remains consistent across variations, attackers adapt Command-and-Control (C&C) domains to evade detection.

Victims are instructed to execute commands manually, such as pasting code into the Windows Run dialog (Win+R), which leads to the installation of malware such as NetSupport RAT or Lumma Stealer.

According to researchers at Gen Digital, these tools grant attackers remote access and enable data theft, with cryptocurrency serving as an enticing lure for victims.
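
Because Windows records what a user types into the Run dialog in the per-user RunMRU registry key, defenders can check that history for signs that one of these commands was pasted. The following is a minimal, illustrative Python sketch; the keyword list is an assumption added for demonstration and is not drawn from the campaign's actual indicators.

import re
import winreg  # Windows-only standard library module

# Illustrative keywords only; real detection logic would be far more nuanced.
SUSPICIOUS = re.compile(r"(powershell|mshta|cmd\s*/c|curl|certutil|bitsadmin)", re.IGNORECASE)

# Registry key where Windows records what a user typed into the Run dialog (Win+R).
RUNMRU = r"Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU"

def suspicious_run_entries():
    """Yield Run-dialog history entries that match common script-delivery keywords."""
    try:
        key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUNMRU)
    except FileNotFoundError:
        return  # no Run history recorded for this user
    with key:
        _, value_count, _ = winreg.QueryInfoKey(key)
        for i in range(value_count):
            name, data, _ = winreg.EnumValue(key, i)
            if name != "MRUList" and SUSPICIOUS.search(str(data)):
                yield name, data

if __name__ == "__main__":
    for name, data in suspicious_run_entries():
        print(f"Suspicious Run entry {name!r}: {data}")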

To drive traffic to these videos, the attackers rely on malvertising, promoting the content through sponsored ads on YouTube.

Image: Tutorial video showing how to insert the (malicious) script into the command prompt.

These ads target users interested in cryptocurrency or financial opportunities, ensuring high visibility among susceptible audiences.

The videos often feature misleading engagement metrics, such as inflated views and comments, further enhancing their credibility.

Attackers also employ evasive hosting strategies, shifting from established platforms such as Pasteco to lesser-known or attacker-controlled domains.

This adaptability underscores their commitment to avoiding detection while maintaining operational effectiveness.

Cybersecurity firms have responded by enhancing protective measures, such as Clipboard Protection features designed to counter clipboard-based scams.
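
As a rough illustration of the idea behind such clipboard protection, the sketch below scans the current clipboard text for command patterns typical of "scam-yourself" instructions. It uses only Python's standard library, and the patterns are illustrative assumptions rather than any vendor's actual rule set.

import re
import tkinter

# Illustrative patterns for commands often pasted in scam-yourself instructions;
# a real clipboard-protection feature would use a maintained, much broader rule set.
SUSPICIOUS = re.compile(
    r"(powershell\s+-enc|mshta\s+https?://|invoke-webrequest|iex\s*\()",
    re.IGNORECASE,
)

def clipboard_text() -> str:
    """Read the current clipboard contents using the standard library (tkinter)."""
    root = tkinter.Tk()
    root.withdraw()  # no visible window needed, we only want clipboard access
    try:
        return root.clipboard_get()
    except tkinter.TclError:
        return ""  # clipboard is empty or holds non-text data
    finally:
        root.destroy()

if __name__ == "__main__":
    if SUSPICIOUS.search(clipboard_text()):
        print("Warning: clipboard holds a command pattern often used in scam-yourself attacks.")
    else:
        print("Clipboard looks clean (by this very rough heuristic).")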

However, the rapid advancement of deepfake and AI technologies poses significant challenges for detection and prevention efforts.

As these tactics evolve, distinguishing between genuine content and fraudulent schemes will become increasingly difficult.

This alarming trend highlights the importance of vigilance and robust cybersecurity practices.

Users are urged to verify sources before executing online instructions and remain cautious of unsolicited content promising quick financial gains.

With cybercriminals refining their methods, proactive security measures will be critical in combating this new wave of AI-powered threats.
