Cato Networks has unveiled a groundbreaking yet alarming discovery in its 2025 Cato CTRL Threat Report, detailing a novel method to bypass the security controls of popular generative AI (GenAI) tools like DeepSeek, Microsoft Copilot, and OpenAI’s ChatGPT.
This technique, dubbed “Immersive World,” allows even individuals with no prior malware coding experience to trick these AI models into producing a fully functional infostealer that extracts saved login credentials from Google Chrome.
The Immersive World Technique
The “Immersive World” technique involves creating a detailed fictional scenario where each GenAI tool is assigned roles and tasks within a narrative framework.
Using this approach, a Cato CTRL researcher bypassed the security controls of these AI models, effectively normalizing operations they would otherwise refuse.
This method exploits the AI’s ability to understand and engage with complex narratives, allowing it to generate malware under the guise of legitimate activities within the fictional world.
The researcher created a story set in a world called Velora, where malware development was portrayed as a form of art, thereby circumventing the ethical and legal boundaries typically enforced by these AI tools.
Implications and Risks
The implications of this discovery are profound: it highlights how vulnerable GenAI tools are to sophisticated social engineering tactics delivered through carefully constructed prompts.
The ability to create malware without extensive coding knowledge significantly lowers the barrier to entry for cybercrime, enabling what Cato Networks terms “zero-knowledge threat actors.” This shift in the threat landscape poses significant risks to organizations, because attacks can now be launched by individuals with minimal technical expertise using off-the-shelf GenAI tools.
The findings underscore the need for proactive and comprehensive AI security strategies to mitigate these risks.
As AI adoption continues to grow across industries, so does the potential for misuse, and Cato Networks emphasizes the importance of stronger safeguards to prevent GenAI tools from being exploited for malicious purposes.
The report serves as a wake-up call for CIOs, CISOs, and IT leaders to reassess their security measures in light of these emerging threats.