A critical Remote Code Execution (RCE) vulnerability (CVE-2025-32434) has been identified in PyTorch, one of the most widely used open-source machine learning frameworks.
This flaw, discovered by security researcher Ji’an Zhou, undermines the safety of the torch.load() function even when configured with weights_only=True, a parameter long trusted to prevent unsafe deserialization.
Affecting PyTorch versions ≤2.5.1, the vulnerability carries a CVSS v4 score of 9.3, posing severe risks to systems relying on PyTorch for model deployment.
Technical Breakdown
The vulnerability stems from unsafe deserialization in PyTorch’s model-loading mechanism.
While weights_only=True was designed to restrict deserialization to tensors and primitive types, Zhou demonstrated that attackers can craft malicious model files that bypass these restrictions and execute arbitrary code during loading.
Example:

```python
# Unsafe usage (even with weights_only=True)
model = torch.load("malicious_model.pt", weights_only=True)  # Triggers RCE
```
This exploit leverages inconsistencies in PyTorch’s serialization validation, enabling attackers to inject payloads that compromise host systems.
The impact is particularly severe in cloud-based AI environments, where compromised models could facilitate lateral movement or data exfiltration.
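The underlying risk is that PyTorch checkpoints are pickle-based archives, and Python's pickle protocol can invoke arbitrary callables during deserialization. The snippet below is a minimal, harmless illustration of that general mechanism, not the specific PyTorch bypass, which has not been publicly detailed:

```python
import pickle

class Malicious:
    # __reduce__ tells pickle what to call when the object is deserialized;
    # an attacker would substitute something like os.system for print.
    def __reduce__(self):
        return (print, ("arbitrary code ran during deserialization",))

blob = pickle.dumps(Malicious())
pickle.loads(blob)  # the callable runs at load time, before any type checks
```

This is exactly why weights_only=True was introduced: to reject pickled payloads that reference callables. The CVE shows that filter could be evaded.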
Mitigation and Patching
The PyTorch team has released version 2.6.0 to address this issue. Users must immediately update their installations:
```bash
# Update via pip
pip install --upgrade torch==2.6.0

# Update via conda
conda update pytorch -c pytorch
```
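After upgrading, it is worth verifying programmatically that the installed build is at or above the fixed release. The helper below is an illustrative sketch (the is_patched name is ours, not part of PyTorch); it tolerates local suffixes such as "+cu121":

```python
def is_patched(version: str, fixed=(2, 6, 0)) -> bool:
    """Return True if a PyTorch version string is at or above the fixed release."""
    # Strip local build suffixes such as "+cu121" before parsing.
    core = version.split("+")[0]
    parts = []
    for piece in core.split(".")[:3]:
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits or 0))
    return tuple(parts) >= fixed

# Typical use after upgrading:
#   import torch; assert is_patched(torch.__version__)
```

In a real deployment, a packaging library's version parser is preferable to this hand-rolled comparison, but the check itself belongs in CI so vulnerable builds never ship.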
For systems that cannot be updated immediately, the only viable workaround is to stop loading untrusted files with torch.load() entirely, since weights_only=True no longer guarantees safety on affected versions.
Alternative model-loading methods, such as using explicit tensor extraction tools, are recommended until the patch is applied.
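As a stopgap, a checkpoint can be statically screened before any deserialization by scanning the pickle streams inside the .pt zip archive for opcodes that import or invoke objects. This is a heuristic sketch using only the standard library (the function names are ours); it flags files for review, it does not make them safe:

```python
import pickletools
import zipfile

# Opcodes that can import arbitrary callables (GLOBAL/STACK_GLOBAL)
# or invoke them (REDUCE), plus legacy object-construction opcodes.
RISKY_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ"}

def pickle_opcodes(pt_path):
    """Collect pickle opcode names from every .pkl member of a .pt
    archive without executing any of the pickled data."""
    ops = set()
    with zipfile.ZipFile(pt_path) as zf:
        for name in zf.namelist():
            if name.endswith(".pkl"):
                for opcode, _, _ in pickletools.genops(zf.read(name)):
                    ops.add(opcode.name)
    return ops

def looks_suspicious(pt_path):
    return bool(pickle_opcodes(pt_path) & RISKY_OPCODES)
```

Note that even legitimate PyTorch checkpoints contain some of these opcodes to rebuild tensors, so a practical scanner must additionally allow-list the specific globals PyTorch itself uses, which is what dedicated model-scanning tools do.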
Broader Implications
This vulnerability challenges long-standing security assumptions in machine learning workflows.
Key takeaways include:
- Trust in weights_only=True is broken: Developers widely believed this parameter ensured safe model loading, but the flaw reveals critical gaps in deserialization safeguards.
- AI supply chain risks: Malicious models distributed via public repositories (e.g., Hugging Face Hub) could exploit this vulnerability at scale.
- Urgency of patch adoption: With proof-of-concept exploits likely to emerge soon, delayed updates risk widespread system compromises.
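Independent of the PyTorch patch, pinning downloaded model artifacts to a known digest limits the supply-chain exposure described above. A minimal sketch (the function names are ours; the expected digest must come from the publisher through a trusted channel):

```python
import hashlib
import hmac

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large checkpoints never load into RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path, expected_hex):
    # Constant-time comparison against the digest published by the provider.
    return hmac.compare_digest(sha256_of(path), expected_hex)
```

Digest pinning does not detect a malicious model that was published with a matching hash, but it does prevent silent substitution of an artifact after review.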
Researcher Insights
Ji’an Zhou emphasized the paradox of the vulnerability: “Since everyone knows weights_only=False is unsafe, they use weights_only=True to mitigate security issues. But this bypass proves no configuration is inherently safe without rigorous validation.”
The discovery underscores the need for continuous security audits in open-source AI frameworks, especially as they become infrastructure-critical.
CVE-2025-32434 serves as a stark reminder of the evolving threats in AI deployment ecosystems. Organizations using PyTorch must prioritize updating to version 2.6.0 and audit existing models for potential tampering.
As AI systems permeate industries, proactive vulnerability management—not just reactive patching—will be essential to safeguarding against increasingly sophisticated attacks.