The digital workplace is undergoing a dramatic transformation as generative AI tools, such as ChatGPT, and enterprise SaaS solutions become central to productivity; however, this shift has also introduced new, poorly understood risks.
The latest telemetry report, drawn from real enterprise browsing data, exposes a startling trend: GenAI tools have overtaken classic SaaS applications as the primary conduit for sensitive corporate data leaving the organization.
Nearly half of all employees within the observed environments regularly interact with GenAI platforms, and an alarming 40% of file uploads to these platforms contain regulated data, such as personally identifiable information (PII) or payment card (PCI) data.
Despite security policies emphasizing file controls, attackers and employees alike have begun to favor file-less data movement, quickly copying and pasting sensitive information into AI chat prompts and SaaS workflows.
Telemetry revealed that 77% of employees paste confidential records, such as client contact lists, financial numbers, and source code snippets, directly into GenAI input fields.
The vast majority (82%) of these actions occur via accounts and device sessions not registered with enterprise identity management, effectively making this data invisible to security and compliance auditing systems.
Traditional data loss prevention (DLP) monitors file transfers and attachments; file-less exchanges evade that scrutiny and leave minimal artifacts for investigation.
The risk is compounded by the rapid sharing of content across browser windows, remote desktops, and mobile apps that are beyond the reach of legacy endpoint protections.
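To make the detection gap concrete, the sketch below shows the kind of pattern-based inspection a browser-layer control could apply to pasted text, which never touches a file and therefore never reaches a file scanner. The pattern set and the classifyPaste helper are illustrative assumptions, not the report's methodology or any vendor's detection logic.

```typescript
// Illustrative only: the regex patterns and the classifyPaste helper are
// assumptions, not a specific product's detection logic. A browser-layer
// inspector would run checks like these on paste payloads, which never
// touch a file and therefore bypass file-centric DLP.

const SENSITIVE_PATTERNS: Record<string, RegExp> = {
  email:      /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/,  // PII: email address
  ssn:        /\b\d{3}-\d{2}-\d{4}\b/,                           // PII: US SSN format
  creditCard: /\b(?:\d[ -]?){13,16}\b/,                          // PCI: card-number-like digit run
  sourceCode: /\b(?:function|class|import|def)\b[\s\S]*[{;:]/,   // rough source-code heuristic
};

// Return the names of every pattern that matches a pasted payload.
function classifyPaste(text: string): string[] {
  return Object.entries(SENSITIVE_PATTERNS)
    .filter(([, pattern]) => pattern.test(text))
    .map(([name]) => name);
}

// Example: a client contact pasted into a GenAI prompt is flagged even
// though no file upload ever occurs.
console.log(classifyPaste("Jane Doe, jane.doe@acme.com, card 4111 1111 1111 1111"));
// -> ["email", "creditCard"]
```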
Corporate Logins and Chat Apps Are Not Safer
Many businesses rely on Single Sign-On (SSO) mechanisms, assuming that official corporate credentials ensure security.
Yet the report found that even sanctioned logins for CRM and ERP platforms are undermined by the widespread use of non-SSO access methods.
Unmanaged accounts often access critical platforms, blurring the distinction between legitimate and unauthorized activity.
Chat and IM applications, which are increasingly vital for real-time collaboration, create additional risk, with 87% of observed chat activity flowing through accounts outside of enterprise oversight.
The telemetry data shows that 62% of users pasted sensitive data, such as customer information or business plans, directly into chat apps under unmanaged identities, bypassing all corporate logging and monitoring.
Rethinking DLP: GenAI and SaaS Demand New Controls
This emerging threat landscape demands a radical rethink of enterprise security strategies. The focus must shift from legacy file-centric DLP to dynamic controls that monitor browser-based data flows, copy/paste transactions, and unmanaged SaaS session activity.
Real-time telemetry capture and behavioral analysis become critical technical controls for detecting and preventing exfiltration attempts that escape file-level scrutiny.
Adaptive access policies should block risky paste or chat operations, and enterprises must expand monitoring to unmanaged GenAI and SaaS usage.
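As a rough illustration of what such an adaptive control could look like, the hypothetical browser-extension content script below intercepts paste events on well-known GenAI domains and blocks them when the session is not tied to a managed enterprise identity or the payload matches sensitive patterns. The domain list, cookie name, and patterns are assumptions made for the example, not a description of any particular product.

```typescript
// Hypothetical browser-extension content script, sketched under assumptions:
// the GenAI domain list, the corp_sso_session cookie name, and the patterns
// below are illustrative, not a real vendor API or policy.

const GENAI_DOMAINS = new Set(["chat.openai.com", "gemini.google.com", "claude.ai"]);

// Stand-in for a real enterprise-identity check (e.g. presence of an SSO session).
function isManagedSession(): boolean {
  return document.cookie.includes("corp_sso_session="); // hypothetical cookie name
}

// Minimal sensitivity check; a real control would use fuller classification.
function looksSensitive(text: string): boolean {
  return [
    /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/, // email address (PII)
    /\b(?:\d[ -]?){13,16}\b/,                          // card-number-like digits (PCI)
  ].some((pattern) => pattern.test(text));
}

document.addEventListener(
  "paste",
  (event: ClipboardEvent) => {
    if (!GENAI_DOMAINS.has(window.location.hostname)) return;

    const text = event.clipboardData?.getData("text") ?? "";

    // Adaptive policy: allow clean pastes from managed identities, block the rest.
    if (!isManagedSession() || looksSensitive(text)) {
      event.preventDefault();
      console.warn("Paste blocked by browser DLP policy");
      // A production control would also report the event to a telemetry backend.
    }
  },
  true, // capture phase, so the check runs before the page's own handlers
);
```

Blocking is only one possible outcome; the same hook could instead warn the user or route the event to a telemetry backend for the behavioral analysis described above.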
Only these measures can address new blind spots introduced by file-less interactions and external identities, protecting against accidental and intentional data leaks in the age of AI-enabled productivity.