A new AI platform is gaining attention in cybersecurity circles as researchers reveal how Venice.ai enables users to generate malicious code and phishing content with unprecedented ease.
Unlike mainstream AI tools such as ChatGPT, this “uncensored” platform removes safety guardrails, allowing cybercriminals to create sophisticated attacks for just $18 per month.
Venice.ai markets itself as a “private and permissionless” AI chatbot that deliberately removes content moderation systems found in conventional platforms.
The service operates by storing chat history locally in users’ browsers rather than on company servers, providing anonymity that appeals to malicious actors.
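To illustrate the general pattern (not Venice.ai's actual, non-public implementation), a browser-only chat app can persist history entirely client-side with localStorage; the key name and message shape below are hypothetical:

```typescript
// Minimal sketch of browser-local chat persistence. The storage key and
// message shape are invented for illustration; Venice.ai's real schema is
// not public. The point: history lives in localStorage on the user's
// machine, so nothing is written to a company server.

interface ChatMessage {
  role: "user" | "assistant";
  text: string;
  timestamp: number;
}

const STORAGE_KEY = "chat_history"; // hypothetical key

function loadHistory(): ChatMessage[] {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as ChatMessage[]) : [];
}

function appendMessage(message: ChatMessage): void {
  const history = loadHistory();
  history.push(message);
  // Serialize back into the browser; the data never leaves the device.
  localStorage.setItem(STORAGE_KEY, JSON.stringify(history));
}

function clearHistory(): void {
  // Wiping local state is the only "deletion" needed -- there is no server copy.
  localStorage.removeItem(STORAGE_KEY);
}
```

The privacy trade-off is the anonymity described above: with no server-side record, there is also no server-side audit trail for investigators to subpoena.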
The platform has quickly gained traction in underground hacking communities, where users promote it as an ideal tool for illicit activities.
Unlike previous cybercrime-focused AI tools like WormGPT and FraudGPT, which cost hundreds or thousands of euros on dark web marketplaces, Venice.ai is openly accessible and significantly cheaper.

Dr. Katalin Parti, a cybercrime researcher from Virginia Tech, warns that “the accessibility of AI tools lowers the barrier for entry into fraudulent activities”.
This democratization of AI-powered cyber capabilities means that even amateur scammers can now access professional-grade tools previously available only to sophisticated criminal organizations.
Sophisticated Malware Generation Capabilities
Recent testing by cybersecurity firm Certo revealed Venice.ai’s disturbing ability to generate functional malicious code on demand.
When researchers requested a Windows 11 keylogger, the platform provided complete C# code and even offered advice on making it “stealthier”.
More alarmingly, when asked to create ransomware, Venice.ai produced a Python script that could recursively encrypt files and generate ransom notes with cryptocurrency payment instructions.
The platform also demonstrated its capability to create Android spyware applications, providing detailed code including background services for silent audio recording and remote server communication.
What makes Venice.ai particularly dangerous is its visible internal reasoning process, which explicitly acknowledges ethical constraints and then overrides them.
The platform’s configuration appears designed to “respond to any user query even if it’s offensive or harmful,” representing a fundamental departure from responsible AI development practices.
Security Community Mobilizes
The emergence of Venice.ai has prompted urgent discussions within the cybersecurity community about countering AI-enhanced threats.
Security firms are developing new detection tools specifically designed to identify AI-generated attacks, including phishing email scanners that flag unusually perfect language patterns.
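As a rough illustration of that idea only, and not any vendor's actual detector, the toy heuristic below scores an email body for "too polished" signals: unusually uniform sentence lengths combined with an absence of the informal quirks human-written mail tends to contain. The threshold and marker list are invented for the sketch; production scanners rely on trained classifiers rather than hand rules.

```typescript
// Toy heuristic for "suspiciously polished" email text. Thresholds and
// informal-marker list are hypothetical; this is a sketch of the concept,
// not a real product's detection logic.

function sentenceLengths(text: string): number[] {
  return text
    .split(/[.!?]+/)
    .map((s) => s.trim())
    .filter((s) => s.length > 0)
    .map((s) => s.split(/\s+/).length);
}

function uniformityScore(lengths: number[]): number {
  if (lengths.length < 2) return 0;
  const mean = lengths.reduce((a, b) => a + b, 0) / lengths.length;
  const variance =
    lengths.reduce((a, b) => a + (b - mean) ** 2, 0) / lengths.length;
  // Low variance relative to the mean = suspiciously even sentence rhythm.
  return mean > 0 ? 1 / (1 + variance / mean) : 0;
}

function flagSuspiciouslyPolished(body: string): boolean {
  const informalMarkers = /\b(hi|hey|thanks|cheers|btw|fyi)\b/i;
  const uniform = uniformityScore(sentenceLengths(body));
  // Hypothetical threshold: very even pacing plus zero informal markers
  // is treated here as one weak signal of machine-generated text.
  return uniform > 0.6 && !informalMarkers.test(body);
}

// Example: a stiff, evenly paced message trips the heuristic.
console.log(
  flagSuspiciouslyPolished(
    "Your account requires verification. Please confirm your details now. Failure to comply will suspend access."
  )
);
```

In practice a single signal like this would be one feature among many, combined with sender reputation, link analysis, and user-report data.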
FBI Special Agent Robert Tripp has already warned that “attackers are leveraging AI to craft highly convincing emails to enable fraud schemes”.
The law enforcement community recognizes that AI-crafted phishing emails lack the traditional red flags of conventional scams, making them significantly more dangerous.
Industry experts advocate for a multi-layered response combining technical safeguards, regulatory frameworks, and public awareness campaigns.
Julia Feerrar, Director of Digital Literacy Initiatives at Virginia Tech, notes that “powerful, accessible tools are destined to be co-opted for both positive and negative ends,” emphasizing the need for proactive defensive measures.
The cybersecurity community faces an urgent race to develop countermeasures before more sophisticated AI-powered attack tools emerge in criminal marketplaces.