On March 24, 2026, LiteLLM versions 1.82.7 and 1.82.8 were published to PyPI containing a credential-stealing backdoor. The package has 97 million monthly downloads.
Andrej Karpathy called it "software horror."
He's right.
What Happened
A threat actor called TeamPCP executed a cascading supply chain attack:
- March 19 — Compromised Aqua Security's Trivy vulnerability scanner
- March 23 — Used Trivy access to compromise Checkmarx's KICS GitHub Action
- March 24 — Used KICS CI/CD compromise to steal a LiteLLM maintainer's PyPI credentials
- March 24 — Published poisoned LiteLLM packages to PyPI
The payload is a .pth file (litellm_init.pth) that executes automatically on every Python process startup. You don't need to import LiteLLM. You don't need to call it. If it's installed, it runs.
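To see why that's so dangerous, here is a harmless sketch of the mechanism the payload abuses: Python's site module executes any line of a .pth file that begins with "import" when it processes a site directory, which happens at startup of every interpreter for real site-packages. (The file and code below are illustrative, not the attacker's payload.)

```python
import os, site, tempfile

scratch = tempfile.mkdtemp()
pth_path = os.path.join(scratch, "demo.pth")
# site.py executes any .pth line that begins with "import": this is the
# hook litellm_init.pth abused. Our demo line just sets an env var.
with open(pth_path, "w") as f:
    f.write("import os; os.environ['PTH_DEMO_RAN'] = '1'\n")

# addsitedir processes .pth files the same way interpreter startup does
# for real site-packages directories. No import of anything required.
site.addsitedir(scratch)
print(os.environ.get("PTH_DEMO_RAN"))  # prints: 1
```

Drop a file like that into site-packages and the code runs in every Python process on the machine, forever, whether or not the package itself is ever imported.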
What It Steals
Everything:
- SSH keys
- AWS, GCP, and Azure credentials
- Kubernetes configs and secrets
- Crypto wallets
- .env files (all your API keys)
- Git credentials
- Shell history
- SSL private keys
- CI/CD secrets
- Database connection strings
The stolen data is encrypted and exfiltrated to models.litellm.cloud — a lookalike domain designed to blend into network logs.
In Kubernetes environments, it deploys privileged pods to every node for lateral movement. It installs a persistent systemd backdoor.
Who's Affected
LiteLLM has ~3.4 million daily downloads. But the real danger is transitive dependencies. LiteLLM is pulled in by:
- CrewAI — Multi-agent framework
- DSPy — Programming framework for LLMs
- MLflow — ML experiment tracking (emergency-pinned to <=1.82.6)
- LangChain integrations — Various LangChain community packages
- Hundreds of MCP servers — Many use LiteLLM under the hood
One developer discovered the compromise only because Cursor IDE pulled it in through an MCP plugin. They never directly installed LiteLLM.
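Since the danger is transitive, it's worth checking what pulls LiteLLM into your environment. A stdlib-only sketch (run it on a machine you trust, since starting any Python process on a compromised host executes the .pth payload):

```python
import re
from importlib import metadata

def dependents_of(target: str) -> list[str]:
    """Names of installed distributions that declare `target` as a requirement."""
    target = target.lower()
    hits = []
    for dist in metadata.distributions():
        # dist.requires is a list of requirement strings like "litellm>=1.0",
        # or None if the distribution declares no dependencies.
        for req in dist.requires or []:
            name = re.match(r"[A-Za-z0-9_.-]+", req)
            if name and name.group(0).lower() == target:
                hits.append(dist.metadata["Name"])
                break
    return hits

if __name__ == "__main__":
    print(dependents_of("litellm"))
```

An empty list means nothing installed declares it directly; a non-empty one tells you exactly which framework dragged it in.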
What To Do Right Now
Check if you're affected:
pip show litellm | grep Version
# If 1.82.7 or 1.82.8 — you're compromised
Search for the payload:
find / -name "litellm_init.pth" 2>/dev/null
Check for the backdoor:
ls ~/.config/sysmon/sysmon.py 2>/dev/null
ls ~/.config/systemd/user/sysmon.service 2>/dev/null
In Kubernetes:
kubectl get pods -n kube-system | grep node-setup
If you find any of these: rotate every credential immediately. SSH keys, cloud IAM, API keys, database passwords, everything.
CrewAI Already Dropped LiteLLM
CrewAI published a removal guide the same day, pushing native SDK integrations for OpenAI, Anthropic, Google Gemini, Azure, and Bedrock.
Their message: "Fewer packages = fewer supply-chain risks + better performance."
They're right. Every dependency is an attack surface.
The Bigger Problem
LiteLLM is the most popular open-source LLM proxy. It's what developers use to avoid vendor lock-in — route requests to any LLM through one interface.
But here's the thing: the tool you use to avoid lock-in just became the attack vector. The irony is brutal.
This isn't a one-off. TeamPCP executed three supply chain attacks in five days (Trivy → KICS → LiteLLM), and a related worm, CanisterWorm, is spreading across 47 npm packages. The AI tooling supply chain is under active, coordinated attack.
Managed Platforms Don't Get pip-Installed
A managed LLM proxy like HexaClaw doesn't exist in your dependency tree. There's no package to compromise. There's no .pth file to inject. There's no transitive dependency chain.
You make an HTTPS request to an API endpoint. That's it. The routing, the provider connections, the credential management — all handled server-side, behind authentication, with no code running in your environment.
HexaClaw routes to 41 models across 8 providers (Anthropic, OpenAI, Google, DeepSeek, Mistral, Groq, Cohere, xAI) through one API key. Same multi-provider access that LiteLLM offered, without the 97-million-download attack surface.
# This is your entire "dependency"
curl -X POST https://api.hexaclaw.com/v1/chat/completions \
  -H "Authorization: Bearer hx_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-6",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
No pip install. No package.json. No supply chain.
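The same call works from Python using only the standard library, so there is still no third-party package to poison. A sketch (endpoint and model name as in the curl example above; `HEXACLAW_API_KEY` is an assumed environment variable):

```python
import json, os, urllib.request

def build_request(prompt: str) -> urllib.request.Request:
    """Build the chat-completions request using only the stdlib."""
    body = json.dumps({
        "model": "claude-sonnet-4-6",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        "https://api.hexaclaw.com/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ.get('HEXACLAW_API_KEY', '')}",
            "Content-Type": "application/json",
        },
    )

# To actually send it (with a live key):
#   with urllib.request.urlopen(build_request("Hello")) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```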
Anthropic Gets It
From Anthropic's engineering blog, published this week:
"An agent with three focused tools will outperform an agent with fifteen tools for any given task."
Fewer dependencies. Fewer tools. Fewer attack surfaces. The consolidation trend isn't just about performance — it's about security.
What This Changes
The LiteLLM attack will accelerate three trends:
- Framework vendors dropping LiteLLM — CrewAI already did. Expect DSPy and others to follow.
- Managed proxies over self-hosted — The security burden of self-hosting just became viscerally real.
- Dependency auditing becoming mandatory — pip install is no longer a safe operation without verification.
If you're building AI agents in 2026, the days of casually pip-installing proxy libraries are over. Either audit every dependency, or use a managed service that keeps the routing server-side.
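As a starting point for that auditing, here's a minimal sketch that scans a requirements.txt-style file for the two compromised pins; a real gate would go further and verify package hashes (pip's --require-hashes mode):

```python
import re, sys

# The two poisoned releases named in this incident.
COMPROMISED = {("litellm", "1.82.7"), ("litellm", "1.82.8")}

def audit(lines):
    """Yield (name, version) pins that match the known-bad set."""
    for line in lines:
        m = re.match(r"\s*([A-Za-z0-9_.-]+)\s*==\s*([0-9][\w.]*)", line)
        if m and (m.group(1).lower(), m.group(2)) in COMPROMISED:
            yield m.group(1), m.group(2)

if __name__ == "__main__" and len(sys.argv) > 1:
    hits = list(audit(open(sys.argv[1])))
    for name, ver in hits:
        print(f"BLOCKED: {name}=={ver}")
    sys.exit(1 if hits else 0)
```

Wire something like this into CI so a poisoned pin fails the build before it ever reaches a developer machine.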
HexaClaw provides managed LLM routing to 41 models, plus image/video generation, browser automation, memory, and compute — all through one API key. No packages to install. No supply chain to worry about. Get started free — 1,000 credits, no credit card required.