Cleaning Up AI Slop With AI

October 23, 2025 · Ryan X. Charles

Picture this: you’re reviewing a pull request and notice something wrong. The page loads fine during testing, but the database query pattern is alarming: a separate count query runs for every record on every single page load, and each count scans the table, so the total work grows roughly quadratically, O(N²). For an MVP with a few hundred records, it works. For production with thousands or millions of records, it will grind to a halt.
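
Here’s a minimal sketch of what that pattern tends to look like in practice. The table names, the Database helper, and the function are all hypothetical, not taken from the PR in question:

    // Minimal types for the sketch; a real app would have richer models.
    interface User { id: number; name: string; }
    interface DashboardRow { userId: number; name: string; postCount: number; }
    interface Database {
      query<T>(sql: string, params?: unknown[]): Promise<T[]>;
      queryOne<T>(sql: string, params?: unknown[]): Promise<T>;
    }

    // Hypothetical dashboard loader: one count query per user, on every page load.
    // With N users, this issues N separate queries, and each COUNT(*) scans the
    // posts table, so the total work grows roughly quadratically as data grows.
    async function loadDashboard(db: Database): Promise<DashboardRow[]> {
      const users = await db.query<User>("SELECT id, name FROM users");
      const rows: DashboardRow[] = [];
      for (const user of users) {
        // A query inside a loop: the classic shape reviewers should flag.
        const { count } = await db.queryOne<{ count: number }>(
          "SELECT COUNT(*) AS count FROM posts WHERE author_id = $1",
          [user.id],
        );
        rows.push({ userId: user.id, name: user.name, postCount: count });
      }
      return rows;
    }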

Someone on the team shipped this under a tight deadline, probably with help from an AI agent. The code works. It passes the type checker. But it’s a ticking time bomb.

This is AI slop.

The Real Cost of AI Slop

AI slop isn’t just ugly code. It’s technical debt that compounds in ways that aren’t obvious until it’s too late. Performance problems emerge at scale. O(N²) queries work fine at first. Then they don’t. Security vulnerabilities hide in complexity, with validation logic buried in nested conditionals that nobody reviews carefully. Six months later, you’re staring at a variable called processedOutput and nobody on the team remembers what it actually contains. Changes in one place break three others because the architecture is brittle and tightly coupled. Even AI agents struggle with messy codebases, so the slop becomes harder to fix over time.

The worst part? It’s easy to create under pressure. Tight deadline, simple prompt: “implement this feature now.” The AI delivers something that works, you ship it, and the debt accumulates silently in your codebase.

Fighting Fire With Fire

Here’s the counter-intuitive part: you can use AI agents to clean up AI slop.

CLI AI tools like Claude Code, Cursor CLI, Copilot CLI, OpenAI Codex CLI, Google Gemini CLI, OpenCode, and others are now powerful enough not just to write code, but to analyze, refactor, and systematically improve existing codebases.

The same tool that created the problem can fix it. But the difference isn’t the tool. It’s how you wield it.

Two Modes of AI Usage

Mode 1: Slop Production

You’re under a deadline. You type a rushed prompt: “add user dashboard with post counts.” You don’t review the approach the AI takes. You don’t think through the implications. As soon as the tests pass, you ship it. The result? O(N²) queries hit your database on every page load. Technical debt enters your codebase, silent and growing.

Mode 2: Slop Removal

You have time and focus. You write a detailed prompt: “Analyze the dashboard queries for performance issues. Identify N+1 patterns and propose caching strategies.” You review each suggestion the AI makes. You iterate on the plan together. You validate changes with profiling. The result? Cached values loaded instantly, updated by a background job that runs on schedule instead of blocking page loads.
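
Here’s a sketch of what that read path can look like, reusing the hypothetical Database and DashboardRow types from the sketch above; the cache table name is illustrative:

    // Read path after cleanup: one query against a precomputed cache table.
    // The counts are maintained by a scheduled background job (sketched below),
    // so the page load does no live counting at all.
    async function loadDashboardCached(db: Database): Promise<DashboardRow[]> {
      return db.query<DashboardRow>(
        `SELECT u.id AS "userId", u.name, c.post_count AS "postCount"
           FROM users u
           JOIN user_post_counts c ON c.user_id = u.id`,
      );
    }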

The tool is identical. The difference is attention and intention.

The Cleanup Process

Here’s what can work for cleaning up an O(N²) query problem like this.

Audit the Damage

You start by having the AI agent scan the entire codebase for similar patterns. It finds the obvious offenders: queries inside loops that should never be there. It also finds the subtle ones: count queries that could be aggregated, but instead run individually. The agent generates a report ranked by severity, with the worst performance bottlenecks at the top.
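
The agent reads the actual code, so it catches far more than any grep can, but as a rough illustration of what the audit looks for, a crude heuristic like this hypothetical script flags count queries that appear shortly after a loop header:

    // audit-counts.ts: crude heuristic scan, for illustration only.
    // An AI agent reads the surrounding code and ranks findings by severity;
    // this just flags count queries that appear within a few lines of a loop.
    import { readFileSync, readdirSync, statSync } from "node:fs";
    import { join } from "node:path";

    function* walk(dir: string): Generator<string> {
      for (const entry of readdirSync(dir)) {
        const full = join(dir, entry);
        if (statSync(full).isDirectory()) yield* walk(full);
        else if (full.endsWith(".ts")) yield full;
      }
    }

    const LOOKBACK = 10; // lines of context treated as "inside a loop"
    for (const file of walk("src")) {
      const lines = readFileSync(file, "utf8").split("\n");
      lines.forEach((line, i) => {
        if (!/COUNT\(\*\)|\.count\(/i.test(line)) return;
        const context = lines.slice(Math.max(0, i - LOOKBACK), i).join("\n");
        if (/\bfor\s*\(|\.forEach\(|\.map\(/.test(context)) {
          console.log(`${file}:${i + 1}: possible count query inside a loop`);
        }
      });
    }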

Understand the Context

For each issue, you ask the AI to explain why this pattern exists, what the original developer was trying to accomplish, and what constraints they were working under. This matters. The developer who wrote this wasn’t incompetent. They were solving a real problem under time pressure with the tools they knew. Understanding the intent helps you fix the code without breaking the feature.

Design Alternatives

For the page load query problem, the AI proposes several solutions. You could cache the counts in a separate table. You could update cached values via a background job instead of calculating them live. You could add database indexes for the most common queries. You could use pagination to limit result sets.

You discuss trade-offs. Caching adds complexity but solves the performance problem. Indexes help but don’t eliminate O(N²) behavior. Pagination changes the UX in ways users might notice. Together, you might choose caching with background updates: more complexity, but the right complexity in the right place.

Implement Incrementally

You don’t refactor everything at once. One module at a time. First, add the cache table. Then write the background job. Update one page to use cached values. Run tests. Profile the performance. Only after confirming improvement, move to the next page.
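
Under the same assumptions as the earlier sketches (a hypothetical Database helper that runs arbitrary SQL, Postgres-flavored syntax, illustrative table names), the first two steps might look like this:

    // Step 1: the cache table, one row per user. Normally a migration;
    // shown inline here to keep the sketch self-contained.
    async function createCacheTable(db: Database): Promise<void> {
      await db.query(`
        CREATE TABLE IF NOT EXISTS user_post_counts (
          user_id    BIGINT PRIMARY KEY REFERENCES users(id),
          post_count BIGINT NOT NULL DEFAULT 0,
          updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
        )`);
    }

    // Step 2: the background job. One aggregated query recomputes every count,
    // instead of one count per user. Schedule it with whatever runner your
    // stack already has (cron, a queue, a worker); that part is assumed here.
    async function refreshPostCounts(db: Database): Promise<void> {
      await db.query(`
        INSERT INTO user_post_counts (user_id, post_count, updated_at)
        SELECT u.id, COUNT(p.id), now()
          FROM users u
          LEFT JOIN posts p ON p.author_id = u.id
         GROUP BY u.id
        ON CONFLICT (user_id)
          DO UPDATE SET post_count = EXCLUDED.post_count,
                        updated_at = EXCLUDED.updated_at`);
    }

Recomputing everything in one statement keeps the job simple; if an hourly refresh ever proves too coarse, incremental updates on write are a later option.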

At each step, you review the AI’s code. When it suggests clever abstractions that would be hard to understand later, you push back. When it misses edge cases, you catch them. The AI is fast, but it’s not careful by default. You make it careful by reviewing its work.

Validate Continuously

After each change, the AI runs the type checker, linter, unit tests, integration tests, and performance profiling. You review the output every time. When tests fail, you fix them together. When performance doesn’t improve enough, you iterate. The validation isn’t automatic in the sense that it catches everything. It’s automatic in the sense that it runs consistently. You still have to think about what the results mean.
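
One way to make that validation concrete is a test that counts queries instead of timing them, so the assertion doesn’t flake with hardware. A sketch, reusing the hypothetical Database helper and loadDashboardCached from above, plus a made-up makeTestDb fixture:

    // dashboard.perf.test.ts: assert the dashboard issues a constant number of
    // queries no matter how many users exist, rather than timing it and hoping.
    import { test } from "node:test";
    import assert from "node:assert/strict";

    // Wrap the Database helper so every query is counted.
    function countingDb(inner: Database): { db: Database; queries: () => number } {
      let n = 0;
      const db: Database = {
        query<T>(sql: string, params?: unknown[]) { n++; return inner.query<T>(sql, params); },
        queryOne<T>(sql: string, params?: unknown[]) { n++; return inner.queryOne<T>(sql, params); },
      };
      return { db, queries: () => n };
    }

    test("dashboard query count does not grow with user count", async () => {
      // makeTestDb is a hypothetical fixture that seeds a throwaway database.
      const { db, queries } = countingDb(await makeTestDb({ users: 1000 }));
      await loadDashboardCached(db);
      assert.equal(queries(), 1); // one joined read, regardless of user count
    });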

What AI Can Catch (That You Might Miss)

The interesting part about using AI for cleanup isn’t just speed. It’s coverage. An AI agent can find pattern inconsistencies you would miss: some count queries cached, others not, with no clear system or documentation explaining why. It finds duplicated logic: the same counting logic appearing in five different files with slight variations, meaning bugs fixed in one place stay broken in four others. It finds missing error handling: what happens when the background job fails? The code might not say. And it finds documentation drift: comments claiming queries are “optimized” while the implementation remains naively unoptimized.

Could you find these manually? Sure. But it would take days of careful reading. The AI does it in minutes, and you spend your time reviewing and deciding rather than searching.

The Discipline Required

Let me be honest: this approach isn’t faster than producing slop. It’s slower.

Cleaning up O(N²) queries might take three days. The original feature was built in two. If you rush the AI agent like the original developer did, you’ll just add different slop. Maybe premature optimization. Maybe over-engineered abstractions. Maybe caching bugs that only show up under load.

This only works when you give it your full attention. No multitasking, no “just ship it” mentality. You have to review every change the AI makes, because it will make mistakes. You have to stop the agent when it veers off track, which it will do. You have to actually run the tests and understand what they’re testing, not just watch them pass. And you need slack in the schedule. You can’t do this under a deadline.

When This Makes Sense

The answer is simple: whenever you have the time.

Sometimes you’re on a tight deadline and AI slop is the only logical solution given the constraints. You need to ship. The feature works. It’s not perfect, but perfect is the enemy of done, and done is what the business needs right now. That’s fine. Ship it.

But understand: you’re going to need time and resources later to fix it. AI slop doesn’t disappear. It compounds. The performance problems get worse as the database grows. The security vulnerabilities wait for the wrong person to find them. The maintenance burden increases with every new feature that touches the sloppy code.

When you have that time, use it. Use AI to help make the cleanup process faster. The same tool that created the slop under pressure can remove it when you have the attention and discipline to wield it properly. The question isn’t whether AI slop is justified under deadlines. Sometimes it is. The question is whether you’ll allocate time to clean it up afterward.

AI as Amplifier

AI doesn’t inherently create slop or quality. It amplifies your process.

Rushed process plus AI equals slop, faster. Disciplined process plus AI equals quality, faster.

The choice isn’t whether to use AI. We’re past that point. AI agents are already in our codebases, for better or worse. The choice is whether to use them responsibly.

Those O(N²) queries? With discipline, they can be fixed. The dashboard can load instantly. The background job can update counts every hour. The code can be tested, documented, and ready to scale.

Using AI to clean up slop works. But what makes the difference isn’t the tool. It’s the time, attention, and discipline you bring to wielding it.

AI slop is a choice. So is AI quality.

