Blog

LiteLLM Compromise

March 25, 2026
Rohit Kashibatla

From Trivy to LiteLLM: Inside TeamPCP’s LLM Supply Chain Attack

TL;DR:

On March 24, 2026, threat actor TeamPCP backdoored the popular litellm Python package by stealing its PyPI publish credentials through a compromised Trivy GitHub Action in LiteLLM's CI pipeline. Two malicious versions (1.82.7 and 1.82.8) were live on PyPI for roughly three hours, carrying a three-stage payload designed to harvest cloud credentials, SSH keys, Kubernetes secrets, and cryptocurrency wallets — then exfiltrate them to an attacker-controlled domain. Any environment that installed either version during that window should be treated as compromised, credentials rotated, and affected hosts rebuilt from clean images.

What Happened to LiteLLM

On March 24, 2026, the popular litellm Python package — a lightweight router and gateway for LLM APIs — was briefly turned into a credential‑stealing backdoor.

Two malicious releases (1.82.7 and 1.82.8) were pushed to PyPI using the project maintainer’s own credentials, which had been stolen days earlier via a compromised Trivy GitHub Action in LiteLLM’s CI pipeline.

In just a few hours of exposure, that poisoned package sat at the center of an LLM ecosystem that sees millions of downloads per day, with code designed to harvest cloud, Kubernetes, CI/CD, and even crypto wallet secrets from any environment where Python happened to start.

The two versions used different delivery mechanisms.

  • Version 1.82.7 injected a base64‑encoded payload directly into litellm/proxy/proxy_server.py, so the stealer executed whenever the proxy component was imported.
  • Version 1.82.8 went further by adding litellm_init.pth to site‑packages, abusing Python’s startup hook mechanism to run attacker code on every interpreter startup — including pip, one‑off python -c runs, and IDE helpers — without any explicit import litellm.
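
The .pth abuse in 1.82.8 relies on a documented CPython behavior: any line in a .pth file that begins with import is executed when the interpreter processes site directories at startup. A minimal, benign demonstration of that mechanism, using site.addsitedir to simulate startup processing of a temporary site directory:

```python
import os
import site
import tempfile

# A .pth file in a site directory may contain lines starting with
# "import"; CPython exec()s those lines when the directory is processed.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "demo_init.pth"), "w") as f:
    # Benign stand-in for the malicious loader shipped in 1.82.8
    f.write('import os; os.environ["PTH_HOOK_RAN"] = "1"\n')

site.addsitedir(tmp)  # simulates interpreter-startup .pth processing
print(os.environ.get("PTH_HOOK_RAN"))  # -> 1
```

Because every interpreter startup repeats this processing, attacker code in a .pth runs even when the victim never types import litellm.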

What The Payload Did

The LiteLLM payload followed the same three‑stage pattern seen in the Trivy and Checkmarx KICS compromises.

Stage 1 – Collection

The code performs a broad sweep of the host for:

  • System metadata and logs: hostname, environment variables, processes, routing tables, and /var/log/auth.log.
  • Credentials: SSH private keys, .env files, Git credentials, API keys from shell history, Slack/Discord webhooks, and password hashes from /etc/shadow.
  • Cloud secrets: AWS, GCP, and Azure credentials, plus direct queries to cloud instance metadata and, where possible, AWS Secrets Manager and SSM Parameter Store.
  • Container/Kubernetes data: Docker registry creds, kubeconfigs, service account tokens, cluster secrets.
  • Cryptocurrency artifacts: wallet files and seed phrases for major chains.

Stage 2 – Encryption and exfiltration

Collected data is bundled into tpcp.tar.gz, encrypted using a hybrid AES‑256/RSA‑4096 scheme and POSTed to https://models.litellm.cloud/, a look‑alike domain registered shortly before the LiteLLM poisoning. The RSA key used here matches the key in the Trivy and KICS payloads, providing strong technical linkage across the campaign.
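
The hybrid scheme described above follows a standard pattern: the archive is encrypted with a random AES-256 session key, and only that key is encrypted with the attacker's RSA public key. A sketch of the pattern using the cryptography package (a 2048-bit key stands in for the payload's RSA-4096 for brevity; file names mirror the staging artifacts, not the actual payload code):

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

data = b"contents of tpcp.tar.gz"  # placeholder for the stolen archive

# Symmetric layer: random AES-256 session key encrypts the bulk data
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
payload_enc = AESGCM(session_key).encrypt(nonce, data, None)  # "payload.enc"

# Asymmetric layer: attacker's RSA public key encrypts only the session key
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
session_key_enc = rsa_key.public_key().encrypt(  # "session.key.enc"
    session_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
```

The consequence for responders: without the attacker's RSA private key, intercepted payload.enc blobs cannot be decrypted, so the only reliable recovery path is assuming everything collectable was collected.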

Stage 3 – Persistence and lateral movement

On hosts where it gains a foothold, the payload:

  • Drops a Python backdoor at ~/.config/sysmon/sysmon.py (or the root equivalent) and wires it into a systemd user service named “System Telemetry Service” (sysmon.service).
  • Configures the backdoor to poll https://checkmarx.zone/raw every few minutes for new payload URLs, then fetch and execute them.
  • In Kubernetes, searches for service account tokens, enumerates all secrets, and deploys privileged alpine pods named node-setup-{node_name} in kube-system, mounting host filesystems to install the same sysmon backdoor on each node.

The .pth‑based loader in 1.82.8 also created a fork‑bomb effect when it spawned a Python subprocess that in turn triggered the .pth hook again, which is how the initial discoverer noticed the compromise — their machine’s RAM cratered while debugging a Cursor plugin. That side effect doesn’t reduce the risk; it just gave defenders a noisy early warning.

The Attack Chain: From Trivy to LiteLLM

LiteLLM wasn’t the starting point; it was phase nine in a broader TeamPCP campaign that has been marching through cloud‑native security tooling since late February. The pivot into LiteLLM came through Aqua Security’s Trivy ecosystem, which TeamPCP had already compromised via a pull_request_target workflow bug that let them exfiltrate the aqua-bot GitHub token and push a malicious v0.69.4 release.

By March 19, the attackers controlled the trivy-action GitHub Action tags and used them to deliver a credential‑stealing payload to any project that pulled the action without pinning to a commit SHA. LiteLLM’s CI pipeline was one of those consumers: its build used Trivy via apt with loose versioning, so when a March 24 build ran, the compromised scanner executed inside the GitHub Actions runner and stole the PYPI_PUBLISH token used for litellm releases.
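
The enabling weakness was consuming the action by mutable tag. A hedged sketch of the mitigation in a GitHub Actions workflow, pinning to a full commit SHA (the placeholder below is not a vetted commit; resolve and audit the SHA yourself):

```yaml
# Excerpt of a workflow job that runs Trivy.
# Pinning to a full commit SHA means a retagged or force-pushed
# release cannot silently change the code that executes in CI.
steps:
  - uses: actions/checkout@v4  # also worth SHA-pinning in practice
  - name: Scan image
    uses: aquasecurity/trivy-action@<full-40-char-commit-sha>  # not @master or a version tag
    with:
      image-ref: myorg/myservice:latest
      format: table
```

Tools such as Dependabot can keep SHA pins current while preserving the audit property.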

Armed with valid publisher credentials, TeamPCP didn’t need typosquatting or man‑in‑the‑middle tricks; they simply uploaded new official versions to PyPI. That distinction matters for supply‑chain posture: every traditional integrity check (hashes, TLS, package name) looked clean, because the integrity being preserved was for the attacker’s own artifacts.

What Defenders Should Do Next

  1. Scope and detection
    • Enumerate all installations of litellm and flag any use of versions 1.82.7 or 1.82.8 across dev, CI, staging, and production. Use pip show litellm and inspect your package-manager caches (e.g., find ~/.cache/uv -name "litellm_init.pth").
    • On any host where those versions may have been installed while they were live on PyPI, treat the system as potentially compromised and hunt for LiteLLM‑campaign artifacts:
      • litellm_init.pth in site‑packages, especially if it contains base64‑encoded Python and subprocess calls.
      • ~/.config/sysmon/sysmon.py or /root/.config/sysmon/sysmon.py, and ~/.config/systemd/user/sysmon.service with the description “System Telemetry Service”.
      • Temporary files in /tmp: tpcp.tar.gz, session.key, payload.enc, session.key.enc, .pg_state, pglog.
      • Kubernetes pods named node-setup-* in the kube-system namespace, especially with hostPath mounts or privileged flags.
    • In network telemetry, look for egress to:
      • models.litellm.cloud (exfiltration endpoint)
      • checkmarx.zone/raw (C2 polling)
  2. Containment and cleanup
    • Immediately uninstall litellm 1.82.7/1.82.8 wherever found and pin to a known‑good version (for example, 1.82.6) or remove the dependency until you have a remediation plan.
    • Purge caches: Remove your package manager’s cache (rm -rf ~/.cache/uv or pip cache purge) to ensure dependencies aren’t reinstalled from cached wheels.
    • On any host showing campaign artifacts or suspicious egress, assume credential theft and persistent foothold; rebuild from clean images instead of trying to surgically clean the system.
    • In Kubernetes, cordon and drain affected nodes, remove malicious node-setup-* pods and any “System Telemetry Service” backdoors on the underlying hosts, then rotate cluster secrets.
  3. Credential rotation
    • Rotate credentials that LiteLLM instances or their host environments could access:
      • API keys and tokens stored in LiteLLM configuration (LLM provider keys, proxy keys, etc.).
      • Cloud IAM keys on the host and any long‑lived tokens pulled from metadata services.
      • SSH keys, Git credentials, CI/CD tokens, and registry credentials present on the same machines or runners.
    • Prioritize high‑privilege and widely reused credentials first, such as cloud admin keys and shared CI tokens.
  4. Hardening specifically around LiteLLM going forward
    • Treat LiteLLM as a high‑value component: run it on hardened hosts with minimal additional tooling and least‑privilege IAM, not as a general‑purpose utility library on arbitrary machines.
    • Keep LiteLLM versions pinned and monitored, and alert on unexpected version changes or the appearance of .pth startup hooks in its installation directory.
    • For CI/CD images that build or test LiteLLM‑based services, restrict outbound egress, scope secrets tightly and make them short‑lived, and monitor for connections to suspicious domains from runners.
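
The scoping steps above can be sketched as a small triage script. The version strings and file name come from this post's IOCs; a clean result here does not rule out compromise via another interpreter, virtualenv, or cached wheel:

```python
import site
from importlib.metadata import PackageNotFoundError, version
from pathlib import Path

# Malicious versions and startup-hook file name from the IOC list
BAD_VERSIONS = {"1.82.7", "1.82.8"}
HOOK_NAME = "litellm_init.pth"

def triage():
    findings = []
    # Check the installed litellm version for this interpreter
    try:
        v = version("litellm")
        if v in BAD_VERSIONS:
            findings.append(f"malicious litellm {v} installed")
    except PackageNotFoundError:
        pass
    # Look for the .pth startup hook in every site directory
    for d in site.getsitepackages() + [site.getusersitepackages()]:
        hook = Path(d) / HOOK_NAME
        if hook.exists():
            findings.append(f"startup hook present: {hook}")
    return findings or ["no LiteLLM campaign artifacts found for this interpreter"]

print(triage())
```

Run it under each Python interpreter and virtualenv on the host, since every environment has its own site-packages.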

Indicators of Compromise

Packages / versions

litellm 1.82.7 (backdoored proxy_server.py)
litellm 1.82.8 (adds .pth startup hook)

Malicious file hashes

litellm_init.pth (LiteLLM 1.82.8)
SHA‑256: 71e35aef03099cd1f2d6446734273025a163597de93912df321ef118bf135238

Stage 1 payload
SHA‑256: d6fc0ff06978742a2ef789304bcdbe69a731693ad066a457db0878279830d6a9

1.82.7 .whl
SHA‑256: 8395c3268d5c5dbae1c7c6d4bb3c318c752ba4608cfcd90eb97ffb94a910eac2

1.82.8 .whl
SHA‑256: d2a0d5f564628773b6af7b9c11f6b86531a875bd2d186d7081ab62748a800ebb

proxy_server.py (LiteLLM 1.82.7)
SHA‑256: a0d229be8efcb2f9135e2ad55ba275b76ddcfeb55fa4370e0a522a5bdee0120

Filesystem artifacts

  • Python startup hook: .../site-packages/litellm_init.pth containing a base64‑encoded Python payload that spawns a new python subprocess.
  • Local persistence
    • Backdoor script
~/.config/sysmon/sysmon.py or /root/.config/sysmon/sysmon.py
    • Systemd user service:
      ~/.config/systemd/user/sysmon.service
      Description: System Telemetry Service
  • Temporary / staging files
    • /tmp/tpcp.tar.gz
    • /tmp/session.key
    • /tmp/payload.enc
    • /tmp/session.key.enc
    • /tmp/.pg_state
    • /tmp/pglog

These names (especially tpcp.tar.gz) are reused across the Trivy, KICS, and LiteLLM phases of the TeamPCP campaign.
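
A quick local check for these persistence and staging artifacts can be scripted directly from the paths above (taken verbatim from the IOC list); absence of hits does not prove a host is clean, since the backdoor can fetch and run arbitrary follow-on payloads:

```python
from pathlib import Path

# Persistence and staging paths from the TeamPCP IOC list
ARTIFACTS = [
    Path.home() / ".config/sysmon/sysmon.py",
    Path("/root/.config/sysmon/sysmon.py"),
    Path.home() / ".config/systemd/user/sysmon.service",
    Path("/tmp/tpcp.tar.gz"),
    Path("/tmp/session.key"),
    Path("/tmp/payload.enc"),
    Path("/tmp/session.key.enc"),
    Path("/tmp/.pg_state"),
    Path("/tmp/pglog"),
]

# Path.exists() returns False on permission errors, so this is safe
# to run as an unprivileged user (though root sees more).
found = [str(p) for p in ARTIFACTS if p.exists()]
print(found or "no known TeamPCP filesystem artifacts on this host")
```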

Network IOCs

JARM fingerprint: 27d40d40d00040d00042d43d000000d2e61cae37a985f75ecafb81b33ca523

Exfiltration endpoint: HTTP POST containing encrypted tpcp.tar.gz
models.litellm.cloud
<https://models.litellm.cloud/>

C2 / tasking: HTTP GET used to pull additional payload URLs
checkmarx.zone
<https://checkmarx.zone/raw>

These domains are strong pivots for hunting in firewall, proxy, and egress logs around the time LiteLLM 1.82.7/1.82.8 were present.

Kubernetes artifacts

On clusters where the payload executed with sufficient permissions, look for:

  • Pods in kube-system named: node-setup-<node_name>

These pods are used to mount the node filesystem and drop the sysmon.py backdoor and corresponding systemd service onto each node.

Note:

To prioritize investigation around the IOCs above, focus on node-setup-* pods in kube-system that also:

  • Run as privileged.
  • Have hostPID and/or hostNetwork set to true.
  • Use hostPath mounts to / or other host‑level paths.

These traits aren’t unique to this campaign, but in combination with the node-setup-* naming and timing around the LiteLLM compromise they are strong leads for deeper investigation.
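
The triage criteria above can be expressed as a filter over the JSON that kubectl get pods -n kube-system -o json returns. The sample pod object below is a hypothetical illustration of the shape of that output, not real cluster data:

```python
def suspicious_pods(pod_list):
    """Flag node-setup-* pods that are privileged, use host namespaces,
    or mount the host root filesystem."""
    hits = []
    for pod in pod_list["items"]:
        name = pod["metadata"]["name"]
        if not name.startswith("node-setup-"):
            continue
        spec = pod["spec"]
        privileged = any(
            c.get("securityContext", {}).get("privileged")
            for c in spec.get("containers", [])
        )
        host_mounts = [
            v["hostPath"]["path"]
            for v in spec.get("volumes", [])
            if "hostPath" in v
        ]
        if privileged or spec.get("hostPID") or spec.get("hostNetwork") \
                or "/" in host_mounts:
            hits.append(name)
    return hits

# Hypothetical sample matching the campaign's pod pattern
sample = {"items": [{
    "metadata": {"name": "node-setup-worker-1"},
    "spec": {
        "hostPID": True,
        "containers": [{"name": "x", "image": "alpine",
                        "securityContext": {"privileged": True}}],
        "volumes": [{"name": "host", "hostPath": {"path": "/"}}],
    },
}]}
print(suspicious_pods(sample))  # -> ['node-setup-worker-1']
```

In a live investigation, feed the function json.loads() of the kubectl output and review each hit's creation timestamp against the compromise window.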

Preventing Supply Chain Attacks with Gold Open Source

Attacks like this one exploit a simple fact: public registries cannot guarantee that a legitimate package stays legitimate. Lineaje's Gold Open Source packages are hardened, continuously monitored distributions of popular open source components — providing an auditable supply chain baseline that eliminates this class of risk before it reaches production.

More on the blog