The Vulnerability
CVE-2025-59528 is a code injection flaw in the CustomMCP node of Flowise, the open-source low-code platform widely used to build AI agent workflows and Model Context Protocol (MCP) integrations. The vulnerability carries a CVSS score of 10.0, the highest possible, reflecting that exploitation requires neither authentication nor user interaction and results in full system compromise.
The flaw arises because the CustomMCP node accepts a user-provided mcpServerConfig parameter to build MCP server connection configurations, but executes the resulting JavaScript without any sanitisation or sandboxing. An attacker who can reach the Flowise API, which in many deployments is internet-accessible with no authentication layer, can inject arbitrary JavaScript that runs in the context of the Flowise server process.
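The flaw class can be illustrated with a minimal sketch (hypothetical code, not Flowise's actual implementation): evaluating a user-supplied configuration string via the Function constructor executes it as JavaScript, whereas strict JSON parsing treats the same input as inert data.

```typescript
// Hypothetical illustration of the anti-pattern: treating a
// user-supplied config string as executable JavaScript. Anything the
// caller sends runs with the full privileges of the server process.
function unsafeParseConfig(mcpServerConfig: string): object {
  return new Function(`return (${mcpServerConfig})`)();
}

// A safer shape: accept only strict JSON, so input is data, never code.
function safeParseConfig(mcpServerConfig: string): object {
  const parsed = JSON.parse(mcpServerConfig); // throws on non-JSON input
  if (typeof parsed !== "object" || parsed === null) {
    throw new Error("mcpServerConfig must be a JSON object");
  }
  return parsed;
}
```

The unsafe variant will happily evaluate expressions such as `process.env`, which is exactly the behaviour exploit payloads rely on; the JSON-only variant rejects them with a parse error.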
What makes this particularly dangerous in practice is what that process typically has access to:
- LLM provider API keys (OpenAI, Anthropic, Mistral, etc.) stored in environment variables
- Database connection strings for vector stores and backend databases
- Cloud provider credentials used to access storage, functions, or hosted model endpoints
- Downstream service tokens for integrations wired into the workflow builder
Exploitation payloads observed in the wild specifically target these credential stores, exfiltrating discovered secrets to attacker-controlled infrastructure.
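Part of why the blast radius is so large: any code injected into the server process can enumerate the entire process environment in a couple of lines. A minimal sketch (helper name is illustrative, not attack tooling):

```typescript
// Any code running inside the Flowise process can list every variable
// the server was started with; key-like names show which entries are
// worth exfiltrating. The same filter is useful defensively when
// deciding which credentials to rotate after a suspected compromise.
function findLikelySecrets(env: Record<string, string | undefined>): string[] {
  const keyLike = /(KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL)/i;
  return Object.keys(env).filter((name) => keyLike.test(name));
}
```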
Scale of Exposure
Scanning data collected by researchers shows over 12,000 publicly accessible Flowise instances still running vulnerable versions. That number has persisted for weeks despite the vulnerability being public knowledge, a pattern consistent with the AI tooling ecosystem, where fast-moving development teams often run open-source platforms without the patch governance applied to production infrastructure.
A Metasploit module for CVE-2025-59528 was merged into the framework's main branch, significantly lowering the technical barrier to exploitation. Attacks no longer require bespoke tooling.
Who Is at Risk
Flowise deployments are particularly concentrated in:
- AI development teams and startups building LLM-powered applications
- Enterprise innovation programmes piloting AI workflow automation
- Consultancies and managed service providers operating Flowise on behalf of clients
Many of these deployments run on cloud infrastructure with broad IAM permissions, meaning a compromised Flowise instance may provide a foothold into cloud environments far beyond the application itself.
Self-hosted instances with no authentication (the default Flowise configuration) are trivially exploitable. Instances behind authentication middleware are only protected if that middleware is correctly implemented and not bypassable.
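What "correctly implemented" means in practice includes details such as deny-by-default behaviour and constant-time token comparison. A minimal sketch of a bearer-token check that a reverse proxy or middleware layer might perform (the function and its parameters are illustrative, not part of Flowise):

```typescript
import { timingSafeEqual } from "node:crypto";

// Constant-time bearer-token check. Rejects empty secrets and length
// mismatches up front, since timingSafeEqual throws on unequal-length
// buffers; everything else is compared in constant time.
function isAuthorized(authHeader: string, secret: string): boolean {
  const supplied = Buffer.from(authHeader.replace(/^Bearer\s+/i, ""));
  const expected = Buffer.from(secret);
  if (expected.length === 0 || supplied.length !== expected.length) {
    return false;
  }
  return timingSafeEqual(supplied, expected);
}
```

Note the deny-by-default posture: a missing header, an empty configured secret, or any mismatch all fail closed rather than falling through to the application.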
Recommended Actions
Immediate:
- Identify all Flowise instances in your environment, including those run by development teams, data science groups, and on shared cloud infrastructure
- Upgrade to version 3.0.6 or later (3.1.1 is the current recommended release); the vulnerability was introduced in version 2.x and persists through 3.0.5
- Rotate all credentials accessible from the Flowise environment: LLM API keys, database credentials, cloud service tokens, and any secrets present as environment variables on the host
- Review audit logs for unexpected outbound network connections from Flowise hosts; exfiltration payloads typically POST to external endpoints
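When inventorying instances, the vulnerable range stated above (introduced in 2.x, fixed in 3.0.6) can be checked mechanically. A sketch, assuming version strings of the usual major.minor.patch form:

```typescript
// Flag a Flowise version string as vulnerable to CVE-2025-59528 per
// this advisory: all 2.x releases and 3.0.0 through 3.0.5 are affected;
// 3.0.6 and later are fixed.
function isVulnerable(version: string): boolean {
  const [major, minor, patch] = version.split(".").map(Number);
  if ([major, minor, patch].some(Number.isNaN)) {
    throw new Error(`unparseable version: ${version}`);
  }
  if (major < 2) return false; // predates the vulnerable code path
  if (major === 2) return true; // all 2.x affected
  if (major > 3) return false; // fixed line
  return minor === 0 && patch <= 5; // 3.0.0 - 3.0.5 affected
}
```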
Structural controls:
- Place Flowise behind authentication middleware (Flowise supports basic auth; for production use a reverse proxy with proper identity controls)
- Run Flowise in a network-isolated environment with egress filtering; it should not have unrestricted outbound internet access
- Audit IAM permissions on the cloud account hosting Flowise; remove permissions that are not required for the application's function
- Treat any AI platform's environment as potentially hostile: do not store production credentials in the same context as experimental AI workloads
Broader Context
CVE-2025-59528 is the third CVSS 10.0 vulnerability in the AI tooling ecosystem in the past 18 months. The pattern is consistent: AI platforms are developed rapidly with a focus on capability over security hardening, are often deployed internet-accessible to enable collaboration, and sit on top of extensive credential stores that make them attractive targets. Security teams that have not yet audited their AI tool inventory should treat this as an urgent prompt to do so.