LiteLLM Pre-Auth SQLi CVE-2026-42208 Exploited

Hackers are actively exploiting a critical LiteLLM pre-auth SQL injection flaw. Learn how CVE-2026-42208 works and how to protect your AI gateway now.

April 28, 2026 · VibeWShield News Agent · bleepingcomputer.com
Editorial note: This article was generated by VibeWShield's AI news agent based on the original report. It has been reviewed for accuracy but may contain AI-generated summaries. Always verify critical details from the original source.

Attackers started exploiting CVE-2026-42208, a critical pre-authentication SQL injection vulnerability in LiteLLM, roughly 36 hours after its public disclosure on April 24. The LiteLLM pre-auth SQLi flaw lets anyone with network access to your proxy pull API keys, provider credentials, and config secrets from the database. No login required. If your LiteLLM instance is internet-exposed and running a version older than 1.83.7, assume it has been targeted.

How the LiteLLM SQL Injection Vulnerability Works

The bug lives in LiteLLM's proxy API key verification step. When a request hits any LLM API route, the proxy looks up the value of the Authorization header in its database before authentication completes. The vulnerable code builds that lookup query through string concatenation rather than parameterized queries, which means a crafted Authorization: Bearer value gets interpreted as SQL.

An attacker sends a malicious bearer token to an endpoint like /chat/completions. That payload lands directly in a SQL query. From there, the attacker can read or modify rows in the proxy's database without ever holding a valid credential. The fix in version 1.83.7 replaces the concatenation with parameterized queries, which prevents the injected SQL from being interpreted as code.
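The difference between the vulnerable pattern and the fix can be illustrated with a minimal, self-contained sketch. The table and column names below are hypothetical, chosen only for illustration, not LiteLLM's actual schema:

```python
import sqlite3

# Stand-in for the proxy's key store (hypothetical schema)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE verification_tokens (token TEXT, user_id TEXT)")
conn.execute("INSERT INTO verification_tokens VALUES ('sk-valid', 'alice')")

def check_token_vulnerable(token: str):
    # UNSAFE: the bearer value is concatenated straight into the SQL text,
    # so SQL metacharacters in the token change the query's meaning
    query = f"SELECT user_id FROM verification_tokens WHERE token = '{token}'"
    return conn.execute(query).fetchall()

def check_token_safe(token: str):
    # SAFE: the bearer value is bound as a parameter and can only ever
    # be treated as data, never as SQL code
    return conn.execute(
        "SELECT user_id FROM verification_tokens WHERE token = ?", (token,)
    ).fetchall()

payload = "x' OR '1'='1"
print(check_token_vulnerable(payload))  # matches every row: [('alice',)]
print(check_token_safe(payload))        # matches nothing: []
```

The injected `OR '1'='1` makes the concatenated WHERE clause true for every row, which is exactly how an unauthenticated request can read records it should never see; parameter binding neutralizes the same payload.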

What Attackers Are Actually Targeting

Sysdig researchers documented two distinct phases in the observed attacks. In the first phase, attackers sent broad injection payloads to map the database schema, querying table names and structures. In the second phase, they switched IP addresses and reran more precise, targeted payloads against the specific tables identified in phase one.

The tables they went for contained OpenAI, Anthropic, and Bedrock provider credentials, virtual API keys, master keys, and environment/config data. Sysdig noted zero probing against benign or irrelevant tables. The attacker went directly to where secrets are stored, which suggests prior knowledge of LiteLLM's internal schema or rapid reconnaissance based on the open-source codebase.

LiteLLM is used by tens of thousands of developers as a unified middleware layer for calling multiple LLM providers through a single API. With 45,000 GitHub stars and 7,600 forks, the attack surface is significant. Compromised provider credentials could translate into runaway API costs, data exfiltration, or pivoting into connected infrastructure.

Impact for Developers Running LiteLLM Proxies

Any LiteLLM instance running below version 1.83.7 with a publicly reachable proxy endpoint is at direct risk. The stolen credentials give attackers access to every AI provider account your proxy manages. That means unauthorized model usage billed to your accounts, potential access to prompts and completions passing through the proxy, and a foothold for further attacks if those credentials are reused elsewhere.

LiteLLM has also been a recent supply-chain target. The TeamPCP group distributed malicious PyPI packages impersonating LiteLLM dependencies to deploy infostealers. Combine that with this active SQLi exploitation and the threat picture for LiteLLM deployments is clearly elevated right now.

How to Fix and Protect Your LiteLLM Deployment

Upgrade to LiteLLM 1.83.7 or later immediately. This is the only complete fix.

If upgrading right now is not possible, add the following to your LiteLLM config under general_settings:

general_settings:
  disable_error_logs: true

This workaround blocks the path through which malicious inputs reach the vulnerable query. It is a mitigation, not a patch.

Beyond the immediate fix, take these steps:

  • Rotate every virtual API key, master key, and provider credential stored in your LiteLLM instance
  • Audit logs for requests to /chat/completions with unusual Authorization headers
  • Place your LiteLLM proxy behind a network boundary or VPN if it does not need to be publicly accessible
  • Run a vulnerability scan on your exposed endpoints to confirm no other injection surfaces are present
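The log-audit step above can be sketched as a simple heuristic filter. The log format, regexes, and length threshold here are assumptions to adapt to your own access logs, and the heuristic will produce some false positives:

```python
import re

# Heuristic: flag bearer values that are unusually long or contain SQL
# metacharacters -- quotes, comment markers, UNION/SELECT/OR keywords.
SUSPICIOUS = re.compile(r"('|--|;|\b(union|select|or)\b)", re.IGNORECASE)
MAX_TOKEN_LEN = 200  # legitimate keys are much shorter; tune for your keys

def flag_line(line: str) -> bool:
    # Assumes one "Authorization: Bearer <value>" per log line
    m = re.search(r"Bearer\s+(\S.*)", line)
    if not m:
        return False
    token = m.group(1)
    return len(token) > MAX_TOKEN_LEN or bool(SUSPICIOUS.search(token))

logs = [
    "POST /chat/completions Authorization: Bearer sk-abc123",
    "POST /chat/completions Authorization: Bearer x' UNION SELECT token FROM keys--",
]
for line in logs:
    if flag_line(line):
        print("SUSPICIOUS:", line)
```

Anything this flags between April 24 and your patch date deserves a closer look; absence of hits is not proof of safety, since the heuristic only sees what your log format captures.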

You should also check your PyPI dependencies if you installed LiteLLM recently, given the supply-chain activity targeting this project.

FAQ

How do I know if my LiteLLM instance was already compromised? Check your access logs for POST requests to /chat/completions with malformed or unusually long Authorization: Bearer values, especially between April 24 and now. Treat any internet-exposed instance below version 1.83.7 as potentially compromised and rotate all credentials regardless.

Does the workaround fully protect against CVE-2026-42208? No. Setting disable_error_logs: true removes one of the paths that the injection travels through, but it is not equivalent to the parameterized query fix in 1.83.7. Upgrade as soon as operationally possible.

Is LiteLLM running behind a reverse proxy still vulnerable? If the /chat/completions endpoint or any LLM API route is reachable from the internet, yes. The vulnerability triggers at the application layer before authentication, so TLS termination or a reverse proxy does not prevent exploitation.


Scan your LiteLLM proxy and API endpoints for SQL injection and other critical vulnerabilities at vibewshield.com/scan.
