AI Slop: why we're sick of AI-generated content
The internet is filling up with mass-produced garbage. And we all know it, even if nobody wants to say it out loud.
Open LinkedIn. Scroll past three posts. Count how many start with “In a world where…” or end with “What do you think? 👇”. Count how many have that perfect hook + three points + CTA structure that reeks of a ChatGPT template from a mile away.
Now open Google and search for anything tech-related. Count how many results are 2,000-word articles that say absolutely nothing, padded with unnecessary synonyms and sections that repeat the same idea in different words.
Welcome to the era of AI Slop.
What AI Slop is
“Slop” means swill, pig feed, low-quality slurry. AI Slop is exactly that: low-quality content mass-produced by artificial intelligence, designed to game algorithms, not to provide value to humans.
I’m not talking about using AI as a tool. I use Claude for writing, coding, thinking. AI as an assistant is fantastic.
I’m talking about content generated without supervision, without judgment, without soul. The “50 ways to use ChatGPT” article written by ChatGPT. The LinkedIn post about “authentic leadership” that has nothing authentic about it. The entire blog of a “marketing agency” that’s clearly one prompt executed a thousand times with different keywords.
In my analysis of AI trends for 2026, I mentioned that Merriam-Webster chose “slop” as its 2025 word of the year. Not a coincidence.
Why it’s happening
The logic is simple and perverse.
The algorithms at Google, LinkedIn, and Twitter reward volume and consistency. Publishing every day, even if it’s garbage, ranks better than publishing something good once a month.
AI makes it possible to produce content at near-zero cost. What used to require hiring writers now takes a prompt and a button.
The combination is explosive: incentives to produce a lot + ability to produce a lot + zero cost = flood of crap.
And the worst part: short-term, it works. I’ve seen LinkedIn accounts go from 0 to 50,000 followers publishing AI Slop daily. I’ve seen websites rank in Google with hundreds of auto-generated articles.
Why it’s a problem
Short-term, AI Slop works. Medium-term, it destroys everything.
Destroys trust. Every time you read an empty article, your brain learns to distrust. Eventually, you stop clicking. Stop reading. Stop trusting online content in general.
Destroys discovery. If 90% of Google results are generated garbage, finding something genuinely useful becomes impossible. The signal gets lost in the noise.
Destroys real creators. If someone who writes carefully competes with a thousand auto-generated articles, how do they win? Good content sinks under the weight of volume.
Self-reinforcing. LLMs train on internet content. If the internet fills with AI Slop, future models train on garbage and produce more garbage. It’s a degradation cycle; researchers call it model collapse.
How to detect it
With practice, you can smell AI Slop from a mile away. Some signs:
Suspicious perfection. Impeccable grammar, perfect structure, but zero personality. Humans make mistakes, have verbal tics, have weird opinions. Content that’s too polished is suspicious.
Systematic vagueness. Many words, few concrete ideas. “It’s important to consider multiple factors” instead of telling you what those factors are and why they matter.
Absence of experience. Talks about topics in the abstract, never from personal experience. “Companies should…” instead of “At my company we did X and Y happened.”
That feeling. After reading, you don’t remember anything specific. There was no “huh, interesting” or “I disagree” moment. Just textual white noise.
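A couple of these signals can be roughly approximated in code, if only to see how shallow the mechanical version is. Here’s a toy Python sketch; the cliché list, the first-person heuristic, and the sample text are all illustrative assumptions of mine, not a validated detector:

```python
# Toy slop-signal checker. The cliché list and the first-person
# heuristic are illustrative guesses, not a real detector.
import re

# Clichéd openers/closers like the ones called out above.
CLICHES = [
    "in a world where",
    "what do you think?",
    "it's important to consider",
]

# Crude proxy for "absence of experience": first-person markers
# suggest the author speaks from their own practice.
FIRST_PERSON = re.compile(r"\b(i|we|my|our)\b", re.IGNORECASE)

def slop_signals(text: str) -> dict:
    lowered = text.lower()
    words = text.split()
    return {
        "cliches": [p for p in CLICHES if p in lowered],
        "first_person_per_100_words": round(
            100 * len(FIRST_PERSON.findall(text)) / max(len(words), 1), 1
        ),
    }

if __name__ == "__main__":
    sample = (
        "In a world where AI is everywhere, it's important to consider "
        "multiple factors. What do you think?"
    )
    print(slop_signals(sample))
    # {'cliches': ['in a world where', "what do you think?",
    #  "it's important to consider"], 'first_person_per_100_words': 0.0}
```

Anything like this is trivially gameable, which is rather the point: the signals that actually matter don’t reduce to a keyword list.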
What we can do
As consumers:
- Stop rewarding volume with attention
- Seek sources with their own voice
- Share content that actually adds value, not what the algorithm pushes
- Be more critical about what we read
As creators:
- Don’t compete on volume. It’s a race to the bottom
- Invest in your own voice. What AI can’t copy is your unique perspective
- Be transparent about using AI as a tool (not as a substitute)
- Prioritize depth over quantity
As an industry:
- Search engines need to penalize AI Slop (Google is trying, with mixed results)
- Social platforms need to change the incentives
- We need better ways to verify authorship and originality
My position
I use AI to write. This article went through Claude. I’m not hiding it.
But there’s a difference between using AI as a tool and letting AI be the author. The difference is human judgment, real experience, personal opinion.
When I read something I wrote with AI help, I ask myself: would I sign this if I’d written it by hand? Is there something here that only I could say? Or is it generic content anyone could generate?
If the answer is anyone could generate it… why publish it?
The future
I’m not optimistic short-term. AI Slop will get worse before it gets better. The tools are increasingly accessible, the incentives remain perverse, and I don’t see any real will to change them.
Long-term, I think there’ll be a split. On one side, oceans of generic content nobody reads (or that only bots read). On the other, niches of genuine content where quality and authenticity are rewarded.
The question is where you want to be. Producing slop that gets lost in the noise, or creating something worth reading.
I know my answer.
How do you distinguish AI Slop from genuine content? Think it can be solved? I’m interested in your take.