pip install Trust-Me-Bro: Why AI Coding Assistants Can't Verify What They Install

AI coding assistants install packages from memory, not verified sources. They can't check if a package is safe, compromised, or even real.

Kemal Esensoy
Modified on April 25, 2026

I sat there watching Claude Code do its thing. I was converting some SVG files for a weekend project. Pretty routine stuff. Then this line scrolled by in my terminal:

pip3 install cairosvg cairocffi defusedxml --break-system-packages

Three packages. Installed globally. On my Mac. With a flag that literally says "break system packages."

I hit approve because I was in the zone. I'd been letting Claude Code cook for about 20 minutes and the results were impressive. Why would I stop the momentum to Google some Python library?

But then I paused. Who made these packages? Are they safe? Are they even the right ones, or did the AI hallucinate a name that happens to exist on PyPI?

I had no idea. And neither did Claude.

Turns out those packages are legitimate. They're maintained by CourtBouillon, a French company that's been building Python tools for years. But here's the thing: I only know that because I went back and checked after the fact. In the moment, I was one "approve" click away from installing whatever showed up.

If you're using AI coding tools, you probably are too.

What Claude Code Just Installed on My Mac

Let me back up. Claude Code runs commands directly on your computer. Not in a sandbox. Not in some cloud environment. On your actual machine. When it runs pip install, those packages land in your system Python. They persist after the conversation ends. They're just... there now.

The three packages it installed, cairosvg, cairocffi, and defusedxml, are all well-maintained, widely used, and clean. I checked them on Snyk after the fact. No known vulnerabilities. The maintainers (Guillaume Ayoub and Simon Sapin at CourtBouillon) have solid track records.
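
If you want to repeat that after-the-fact check on your own machine, the quickest version is just asking pip what it put where. A minimal sketch (field values and paths will vary with your setup):

pip3 show cairosvg cairocffi defusedxml
# Prints Name, Version, Author, Location on disk, and Requires for each package,
# so you can at least see what landed and where it lives.

It won't tell you whether a package is malicious, but it takes ten seconds and confirms exactly what is now sitting in your system Python.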

But Claude didn't check any of that. It recommended these packages because they appeared in its training data. It knew "for SVG conversion, developers use cairosvg" because that pattern showed up millions of times. Whether those packages are still maintained, recently compromised, or even real in 2026? Claude has no idea. It's working from memory.

How Package Managers Actually Work (The 30-Second Version)

If you've never thought about where packages come from, here's the short version.

When your AI assistant runs pip install something or npm install something, it's pulling code from a public registry. PyPI for Python. npm for JavaScript. Think of them like app stores for code.

Except they're nothing like app stores.

When you download an app from Apple's App Store, it goes through a review process. Someone checked it for malware, verified the developer's identity, gave it a stamp of approval. You can argue about how thorough that process is, but it exists.

PyPI has no review process. Anyone can create an account and upload a package. Right now. It takes about five minutes. There are over 600,000 packages on PyPI, and until 2023, there wasn't a single dedicated security engineer on staff. They hired their first one that year. One person. For 600,000 packages downloaded billions of times per month.
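
To give you a sense of how low that barrier is, here's roughly what publishing looks like, assuming you already have a PyPI account, an API token, and a pyproject.toml (a sketch, not a tutorial):

python3 -m pip install build twine
python3 -m build                 # builds your project into dist/ as a wheel and sdist
python3 -m twine upload dist/*   # uploads to PyPI; anyone can pip install it moments later

No human reads the code before it goes live. That's the entire process.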

npm is similar. Over two million packages, minimal vetting.

So when your AI coding assistant installs a package, it's pulling from a giant repository where the barrier to entry is basically "have an email address." If you've ever worried about what a compromised npm package could do to your website, the AI layer makes it worse.

What Happened to LiteLLM (And Why It Matters to You)

This isn't hypothetical. Let me tell you about something that happened last month.

LiteLLM is a popular Python package that lets developers work with different AI models through a single interface. It gets 3.4 million downloads per day. 95 million per month. It's everywhere in AI development.

On March 24, 2026, someone compromised LiteLLM on PyPI. Versions 1.82.7 and 1.82.8 looked identical to the real thing but did a few extra things in the background. They stole SSH keys. Grabbed cloud service tokens. Looked for cryptocurrency wallets. And planted a backdoor using systemd, meaning the malware would survive reboots and keep running even after you thought you'd cleaned up.

The compromised versions were live for roughly three hours before getting caught.

Three hours. That's it. But during those three hours, every single pip install litellm pulled the malicious version. Every automated deployment pipeline. Every developer setting up a new project. Every AI coding assistant that decided litellm was what you needed.

If an AI tool installed litellm during that window and you approved it? Your machine was compromised. No warning. No red flag. Just a normal-looking installation that happened to include a backdoor.

The attack was attributed to a group called TeamPCP. They didn't hack PyPI itself. They compromised the package's CI/CD pipeline through Trivy, a security scanning tool used in its build process. That was enough to push poisoned versions to millions of users.

The Package That Doesn't Exist (Until an Attacker Makes It)

Here's where AI code assistant supply chain security gets really interesting. And by interesting, I mean terrifying.

You know how AI models sometimes make things up? They hallucinate facts, invent citations, fabricate quotes. Turns out they also hallucinate package names.

Researchers call it slopsquatting. The term was coined by Seth Larson, the Python Software Foundation's security developer-in-residence. A study presented at USENIX Security 2025 analyzed 2.23 million package recommendations generated by AI coding assistants across 16 different models. Out of those, 440,445 were hallucinated. Packages that don't exist. Names the AI invented because they sounded right.

That's nearly 20% of all recommendations pointing to thin air.

But here's the dangerous part: 43% of those hallucinated names came up repeatedly. The same fake package name, suggested over and over by different AI models in different sessions. That's a pattern. And attackers noticed.

The attack is simple. Watch what package names AI models hallucinate. Register those names on PyPI or npm. Fill them with malicious code. Wait for the next person whose AI assistant recommends your package.

It works. A package called huggingface-cli (a hallucinated variation of the real huggingface_hub) racked up 30,000 downloads. Alibaba even copied the install command into their public README before anyone caught on. Another fake package called react-codeshift spread to 237 GitHub repositories via AI-generated code.

In one PyPI typosquatting campaign, 500 malicious packages appeared at once. A set of 128 phantom packages accumulated 121,539 downloads between July 2025 and January 2026. With automation, attackers can push out 280 malicious packages in a single weekend.

I've written before about the security blind spots in vibe-coded applications. Slopsquatting makes those blind spots even larger.

Your AI Assistant Installs From Memory, Not From a Verified Source

Let's be clear about what's actually happening when your AI coding tool suggests a package.

It's not checking PyPI in real time. It's not scanning the package for vulnerabilities. It's not verifying who maintains it or when it was last updated. It's not even checking if the package exists.

It's recommending packages from its training data. That's it.

The AI learned that "for SVG conversion, people use cairosvg" because that pattern appeared in its training set. It doesn't know if cairosvg is still maintained, if it was recently compromised, or if the version on PyPI right now is the same one it learned about months ago.
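
For contrast, here's what an actual real-time check looks like. pip ships an experimental subcommand that asks the index what's published right now, which is already more than the model ever does (the command is marked experimental, so its output format may change):

pip3 index versions cairosvg
# Queries PyPI directly and lists the versions that exist at this moment.

Your AI assistant runs nothing like this before recommending a package.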

Endor Labs found that only 1 in 5 AI dependency recommendations are both safe and real. Four out of five are either non-existent, outdated, or otherwise problematic.

Veracode's research tested over 100 AI models and found that 45% of AI-generated code contains security flaws. Not just package issues. The code itself.

Claude is great at building software. It's also great at introducing problems you won't notice until much later. When it says pip install some-package, it sounds like it knows exactly what it's doing. But it's working from memory, not from verified, real-time information.

Sound familiar? It should. It's the same confidence that makes these tools so useful, and so dangerous to trust blindly.

What --break-system-packages Actually Means

Remember that flag from my opening story? --break-system-packages? Let me explain what that actually does, because it's more alarming than it sounds.

Modern Python setups (the Python that ships with most Linux distributions, and the one Homebrew installs on macOS) have started refusing to let you install packages globally using pip. This is a deliberate safety measure called PEP 668. The idea is simple: your operating system and its package manager depend on specific Python packages to function. If you install something globally that conflicts with those system packages, you can break things. Sometimes important things.

So the system says no. It tells you to use a virtual environment instead. A virtual environment is basically an isolated sandbox where your packages can't interfere with anything else on your machine.
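
Setting one up is a couple of commands, and it's worth knowing them even if you normally let the AI handle it. A minimal sketch for macOS or Linux (the .venv directory name is just a convention):

python3 -m venv .venv                        # create an isolated environment inside the project folder
source .venv/bin/activate                    # use it for the current shell session
pip install cairosvg cairocffi defusedxml    # packages land in .venv, not in your system Python
deactivate                                   # step back out when you're done

Delete the .venv folder and everything that was installed into it is gone with it.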

The --break-system-packages flag overrides that protection. It tells pip: "I know this is risky, do it anyway."

Your AI coding assistant used that flag because the normal installation was blocked. The safety rail was in the way, so the AI found the workaround. Not maliciously. Not even carelessly, from the AI's perspective. It had a task to complete, the default approach didn't work, so it escalated.

But think about what that means. The operating system explicitly said "don't do this, it's not safe." And the AI's response was to find the override flag.

That should make you uncomfortable.

What You Can Actually Do About This

I'm not going to tell you to stop using AI coding tools. I use them every day to run my agency. They're genuinely useful. But you need to understand what you're approving when you click "yes" on a terminal command.

Here's what I actually do now.

Read before you approve. When your AI assistant wants to run a command, especially one that installs something, take five seconds to read it. I know it breaks the flow. Do it anyway.

Ask your AI what it's installing. Before approving, ask: "What is this package? Who maintains it? What does it do?" The AI might not give you a perfectly accurate answer (remember, it's working from training data), but it forces a moment of reflection.

Use virtual environments. If you're doing anything with Python, ask your AI to set up a venv first. This way, even if something bad gets installed, it's contained. It can't touch your system packages or other projects.

Watch for --break-system-packages. If you see this flag, stop. Ask why. There's almost always a better approach.

Check new packages yourself. It takes 30 seconds. Search the package name on PyPI or npm. Look at the maintainer. Check when it was last updated. Check the download count. A package with 12 downloads created yesterday is very different from one with millions of downloads maintained for years.
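
If you'd rather stay in the terminal, PyPI exposes the same information as a JSON endpoint, and npm has a comparable command. A rough sketch (swap in whatever package your AI just suggested):

curl -s https://pypi.org/pypi/cairosvg/json | python3 -m json.tool | less
# Check info.author, info.maintainer, info.version, and the upload_time values under
# releases: a package with a single release uploaded yesterday is the red flag.

npm view react time
# The npm equivalent: publish dates for every version of a package.

Neither command proves a package is safe, but both make a brand-new or abandoned one obvious in seconds.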

Be extra cautious with packages you've never heard of. If the AI suggests a package name that doesn't sound familiar and you can't find much about it online, that's a red flag. It might be hallucinated. It might be a slopsquatting target.

The point isn't paranoia. The point is informed consent. When you approve an installation, you should have at least a basic understanding of what you're putting on your machine.

AI code assistant supply chain security isn't a problem that's going away. If anything, it's getting worse as more people use these tools and as attackers get better at exploiting the gap between what AI recommends and what's actually safe. The registries are working on better security measures. AI companies are working on better safeguards. But right now, today, the last line of defense is you reading the terminal before you hit enter.

I run a one-person agency. I don't have a security team reviewing my dependencies. I don't have an IT department monitoring my installations. It's just me and my terminal and whatever my AI assistant decides I need. If that sounds like your situation too, the least we can do is pay attention to what we're approving.

If you want to talk about keeping your web projects secure, or if you're wondering what your AI tools have been installing without you fully understanding it, let's chat.

About the Author

Kemal Esensoy

Kemal Esensoy, founder of Wunderlandmedia, started his journey as a freelance web developer and designer. He conducted web design courses with over 3,000 students. Today, he leads an award-winning full-stack agency specializing in web development, SEO, and digital marketing.
