What Happened
Security researchers disclosed three vulnerabilities in LangChain and LangGraph, two of the most widely adopted open-source frameworks for building large language model applications and AI agent pipelines. The flaws expose distinct categories of sensitive data and affect packages with a combined download count exceeding 84 million per week.
The first vulnerability, CVE-2026-34070 (CVSS 7.5), is a path traversal flaw in LangChain’s prompt-loading function located in langchain_core/prompts/loading.py. The function lacks input validation, allowing an attacker to read arbitrary files from the host filesystem by manipulating the prompt load path. This exposes Docker configuration files, cloud credential files, application secrets stored in the local filesystem, and any other file accessible to the process running the LangChain application.
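The root cause is a missing containment check on the load path. A minimal sketch of the kind of check that blocks this class of traversal, using a hypothetical helper name and base directory (illustrative hardening, not the patched LangChain code):

```python
from pathlib import Path

def safe_prompt_path(base_dir: str, user_path: str) -> Path:
    """Resolve a user-supplied prompt path and reject anything that
    escapes the allowed base directory (hypothetical helper, not part
    of the LangChain API)."""
    base = Path(base_dir).resolve()
    candidate = (base / user_path).resolve()
    # resolve() collapses ".." segments, so a traversal attempt such as
    # "../../etc/passwd" resolves to a location outside base and fails
    # the containment check below.
    if not candidate.is_relative_to(base):
        raise ValueError(f"prompt path escapes {base}: {user_path!r}")
    return candidate
```

An equivalent check applied inside the prompt loader is what distinguishes "read a prompt template" from "read any file the process can open".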
The second, CVE-2025-68664 (CVSS 9.3), is an unsafe deserialization vulnerability. The LangChain dumps() and dumpd() serialization functions failed to properly escape user-controlled input containing the reserved ‘lc’ key. An attacker able to inject an ‘lc’-keyed object into a LangChain orchestration loop — achievable through prompt injection in many pipeline configurations — can trigger instantiation of an arbitrary object, enabling exfiltration of API keys and environment secrets.
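Beyond patching, applications that round-trip untrusted data through LangChain serialization can add a defence-in-depth check for the reserved key before that data reaches the orchestration loop. A sketch, assuming a hypothetical screening helper (this is not part of the patched API):

```python
def contains_reserved_lc_key(value) -> bool:
    """Recursively scan untrusted JSON-like data for the reserved 'lc'
    key, which LangChain's serializer treats as a marker for object
    reconstruction. Illustrative pre-filter, not the official fix."""
    if isinstance(value, dict):
        # A dict carrying 'lc' at any nesting depth is suspicious when
        # it originates from user input rather than the framework.
        if "lc" in value:
            return True
        return any(contains_reserved_lc_key(v) for v in value.values())
    if isinstance(value, list):
        return any(contains_reserved_lc_key(v) for v in value)
    return False
```

Rejecting or quarantining such input at the application boundary narrows the window in which a prompt-injected payload can reach the deserializer, but it is a mitigation layer, not a substitute for upgrading.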
The third, CVE-2025-67644, is a SQL injection flaw in LangGraph’s SQLite checkpoint implementation. An attacker manipulating metadata filter keys can run arbitrary SQL queries against the conversation history database, exposing all stored agent interactions and any sensitive data passed through LangGraph workflows. Patches are available: langchain-core 1.2.22 addresses CVE-2026-34070, langchain versions 0.3.81 and 1.2.5 patch CVE-2025-68664, and langgraph-checkpoint-sqlite 3.0.1 resolves CVE-2025-67644.
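The underlying pattern is that SQL identifiers (such as filter keys) cannot be bound as query parameters, so interpolating untrusted keys into a WHERE clause is injectable even when values are parameterized. A minimal sketch of the standard mitigation — allowlist the keys, bind the values — with a hypothetical helper and an illustrative allowlist (not the langgraph-checkpoint-sqlite code):

```python
# Illustrative allowlist; a real implementation would enumerate the
# metadata keys its schema actually supports.
ALLOWED_FILTER_KEYS = {"thread_id", "checkpoint_id", "source"}

def build_metadata_filter(filters: dict) -> tuple[str, list]:
    """Build a SQLite WHERE fragment from untrusted filter input.
    Keys are validated against an allowlist (identifiers cannot be
    bound parameters); values are always passed as bound parameters."""
    clauses, params = [], []
    for key, value in filters.items():
        if key not in ALLOWED_FILTER_KEYS:
            raise ValueError(f"disallowed filter key: {key!r}")
        clauses.append(f"json_extract(metadata, '$.{key}') = ?")
        params.append(value)
    return " AND ".join(clauses), params
```

With this shape, a hostile key like `"source') = 1; DROP TABLE checkpoints; --"` is rejected before it ever reaches the query string.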
Why This Matters for Canadian Organizations
LangChain and LangGraph are the dominant frameworks for building production AI agent systems in Canadian enterprises, startups, and research institutions. Any organization that has deployed customer-facing AI assistants, internal knowledge retrieval tools, automated code review pipelines, or document analysis workflows using these frameworks is potentially affected.
The risk here is not theoretical. CVE-2025-68664, the deserialization flaw with a CVSS of 9.3, is triggerable through prompt injection — a class of attack where a user provides crafted input to an AI system that causes the system to behave in unintended ways. Prompt injection is not just a research-class vulnerability; it has been demonstrated against production LLM deployments. In a LangChain application with public-facing user input, an attacker does not need access to the underlying server to trigger this chain. They send a message to the AI interface and the deserialization runs inside the application backend.
The data categories exposed by these vulnerabilities are high-sensitivity. API keys and environment secrets (CVE-2025-68664) directly enable lateral movement and service impersonation. Conversation history databases (CVE-2025-67644) often contain information users shared with an AI assistant expecting confidentiality, including names, business details, internal process information, and in some deployments, personal health or financial data. Canadian organizations subject to PIPEDA or provincial privacy legislation face breach notification obligations if those records are accessed without authorization.
Canadian AI startups and enterprises building on these frameworks as part of cloud-deployed services also face downstream customer liability exposure if those services are exploited. The combination of a high download volume and frameworks sitting at the core of agentic AI pipelines makes these vulnerabilities a meaningful supply chain risk for the broader Canadian AI ecosystem.
What to Do
Update langchain-core to version 1.2.22 or later, langchain to 0.3.81 or 1.2.5 or later, and langgraph-checkpoint-sqlite to 3.0.1 or later. Audit your application pipelines to identify all prompt-loading functions and confirm they do not accept user-controlled file paths. Review any LangChain application that accepts untrusted user input for prompt injection exposure — this is particularly important for CVE-2025-68664 where the exploit chain runs through normal user interaction. Examine LangGraph checkpoint databases for unexpected query patterns or anomalous access. If any application handles personal data belonging to Canadians, assess whether a privacy breach notification obligation exists under PIPEDA and consult with your privacy team accordingly.
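As a starting point for the audit, a short script can report which of the affected packages are installed in a given environment so their versions can be compared against the patched minimums above (the version strings come from the advisory; the helper name is illustrative):

```python
from importlib.metadata import PackageNotFoundError, version

# Minimum patched versions per the advisory. Note that langchain has
# two patched release lines (0.3.81 and 1.2.5), so a simple minimum
# does not capture it; check that package against both lines manually.
PATCHED_MINIMUMS = {
    "langchain-core": "1.2.22",
    "langgraph-checkpoint-sqlite": "3.0.1",
}

def installed_versions(packages: dict) -> dict:
    """Return the installed version of each named package, or None if
    the package is absent from the environment."""
    report = {}
    for name in packages:
        try:
            report[name] = version(name)
        except PackageNotFoundError:
            report[name] = None
    return report
```

Running `installed_versions(PATCHED_MINIMUMS)` across build images and deployment environments gives a quick inventory of where upgrades are still outstanding.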
Source: The Hacker News

