A critical remote code execution (RCE) vulnerability has been discovered in expr-eval, a widely used JavaScript library for parsing and evaluating mathematical expressions that is commonly embedded in natural language processing pipelines.
The vulnerability, tracked as CVE-2025-12735, poses significant risks to server environments and AI-powered applications that evaluate expressions derived from user input.
## Vulnerability Overview
The library’s widespread adoption makes this vulnerability particularly concerning for organizations running NLP and AI applications in production environments.
According to the SSVC framework, this vulnerability represents a Technical Impact of Total, meaning adversaries gain complete control over the software’s behavior or achieve total disclosure of all system information.
| Identifier | Value |
|---|---|
| CVE ID | CVE-2025-12735 |
| GitHub Advisory | GHSA-jc85-fpwf-qm7x |
| CERT/CC Note | VU#263614 |
| Disclosure Date | November 7, 2025 |
| Last Updated | November 9, 2025 |
The vulnerability stems from a design flaw in the Parser class’s evaluate() method. An attacker can exploit this flaw by defining arbitrary functions within the parser’s context object.
By crafting a malicious payload that reaches the parser through user-controlled input, an attacker can execute system-level commands on the host.
This could lead to unauthorized access to sensitive local resources, data exfiltration, or complete system compromise.
Because evaluate() imposes no restriction on which context functions an expression may invoke, an attacker who controls the evaluated input effectively gains total control over the affected application.
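The vulnerable pattern can be illustrated with a simplified model. This is a hypothetical sketch of the behavior described above, not expr-eval's actual source: the key point is that a function call in an expression resolves against the evaluation scope with no allowlist, so any function planted in the scope will execute.

```javascript
// Simplified model of the flaw -- hypothetical sketch, not expr-eval's
// actual implementation. A function call in the expression resolves
// against the scope object with no allowlist, so any function present
// in the scope is executable from the expression.
function evaluateCall(fnName, args, scope) {
  const fn = scope[fnName];
  if (typeof fn !== 'function') {
    throw new Error(fnName + ' is not a function');
  }
  return fn(...args); // no restriction on which functions may run
}

// Benign use: the application supplies a helper for the expression.
const benign = evaluateCall('double', [21], { double: (n) => n * 2 }); // 42

// Malicious use: if user-controlled input can influence the scope, the
// same mechanism runs arbitrary code -- e.g. a function wrapping
// child_process.execSync -- which is what makes this an RCE.
```

In the real library the scope is the object passed to `evaluate()`, which is why applications that let user input shape that object are exposed.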
Organizations using expr-eval should immediately audit their dependencies and prioritize patching. Two primary remediation paths are available:
1. **Patch via Pull Request #288:** Apply the security patch from the expr-eval repository. The patch introduces a defined allowlist of safe functions, a mandatory registration mechanism for custom functions, and updated test cases to enforce these constraints.
2. **Upgrade to a patched version:** Update to the latest patched release of expr-eval or expr-eval-fork. Notably, expr-eval-fork v3.0.0 is now available and addresses this vulnerability, along with a prior prototype pollution vulnerability that remained unaddressed in the unmaintained original repository.
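The allowlist-and-registration approach described for the patch can be sketched as follows. This is a hypothetical illustration of the mitigation, not the actual code from PR #288:

```javascript
// Hypothetical sketch of the mitigation (allowlist plus explicit
// registration) -- not the actual code from PR #288.
const registeredFunctions = new Map();

// Custom functions must be registered explicitly by the application;
// nothing else is callable from an expression.
function registerFunction(name, fn) {
  if (typeof fn !== 'function') throw new TypeError('fn must be a function');
  registeredFunctions.set(name, fn);
}

// Calls resolve only against the registry, never the raw scope object,
// so attacker-planted scope functions are unreachable.
function evaluateCallSafe(fnName, args) {
  const fn = registeredFunctions.get(fnName);
  if (!fn) throw new Error(fnName + ' is not an allowed function');
  return fn(...args);
}

registerFunction('double', (n) => n * 2);
evaluateCallSafe('double', [21]);        // allowed: 42
// evaluateCallSafe('execSync', ['id']); // throws: not registered
```

The design choice is the same one the patch description names: deny by default, and make the set of callable functions an explicit, reviewable allowlist.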
Use automated tools like npm audit to identify affected versions across your infrastructure.
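The audit and upgrade steps above might look like the following in an npm-based project (a how-to fragment; exact audit output depends on your npm version and lockfile):

```shell
# Check whether the dependency tree contains a vulnerable expr-eval
npm audit
npm ls expr-eval

# One remediation path: switch to the maintained fork that fixes the issue
npm uninstall expr-eval
npm install expr-eval-fork@^3.0.0
```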
Since the library is fundamental to many AI and NLP systems, applying the fix quickly, before exploitation becomes widespread, is essential; prioritize rolling the patched versions out to production.
Security researcher Jangwoo Choe responsibly disclosed the issue, coordinating with GitHub Security and npm to allow adequate time for fixes before publication.