The Slop Problem
This is a chapter from the book manuscript Writing in the Age of AI. If you are interested in publishing, an overview, table of contents, and additional sample chapters are available upon request.
The public conversation about AI and writing has polarized into two camps, and both are wrong in instructive ways.
Camp One says AI can only produce garbage. It has no soul, no experience, no genuine creativity. Human writers have nothing to fear because machines will never produce real art. This camp comforts writers but makes them complacent. It also ignores the market reality: most published writing was already closer to garbage than to art, and AI will produce that middle tier faster and cheaper.
Camp Two says AI will produce everything, eventually if not now. Human writing is inefficient, biased, slow. The future is prompt-driven content at scale. This camp excites investors but misunderstands what writing is for. It treats writing as a production problem — how to generate more text faster — rather than a thinking problem.
The honest answer is uncomfortable, and it lives between the two camps. Most human writing — most of what gets published, shared, read — is closer to formulaic than to inspired. The bell curve is real. Most novels are competent. Most journalism is predictable. Most business writing is filler. The percentage of human writing that carries genuine insight — that reframes, that surprises, that could only have come from this mind — is small. But it is not zero. What separates that small remainder from everything else — and whether AI can cross that divide — is the question both camps have been too invested to ask carefully. Writers have started calling the failure mode “slop,” and the term is worth understanding precisely.
What Slop Is

Slop is not bad writing in the traditional sense. It is not incoherent, poorly structured, or grammatically deficient. It is, in its way, technically accomplished. What it lacks is a specific mind. It is generated writing: the statistical center of all similar content the model has encountered, fluent and hollow in equal measure.
You have read slop. You may not have known the word for it, but you recognized the feeling — the uncanny smoothness of a LinkedIn post that says nothing in four paragraphs, the travel article that hits every expected beat without once surprising you, the blog post that reads like a composite of every other blog post on the topic. Before AI, we called this kind of writing “generic” or “forgettable.” Now we call it slop, because AI has given us a production mechanism that reveals what “generic” actually means: it means statistically average. It means the center of the distribution. It means the text that a probability engine would produce if you gave it a topic and asked it to write.
The anatomy of slop reveals exactly where the process failed. When a writer prompts a model to “write an article about this and that,” they have provided a topic but no seed — no genuine nucleation point around which ideas can crystallize. The model cannot invent one. So it produces the expected path: the argument everyone already knows, supported by the examples everyone already reaches for, in the register most commonly associated with this kind of writing. Competent. Forgettable.
Insight is the antidote — but insight is worth defining precisely, because it is often confused with mere opinion. Opinion is a position you hold. Insight is a reframing — the moment when a familiar thing reveals an unfamiliar face. It arrives with a small shock of recognition, the sense that something is now true that you could not have said before. It tends to be irreducible: paraphrase it and you lose it. And crucially, it is personal — not in the sense of being subjective, but in the sense of being earned, the product of this specific attention, from this specific experience, at this specific moment.
This irreducibility is precisely why insight is the antidote to slop. Slop occupies the center of the distribution; insight lives at the edge.
Why AI Produces Slop

AI produces slop for a specific, technical reason that is worth understanding because it illuminates the boundary between what AI can and cannot do.
A language model generates text by predicting the most probable next token given everything that came before. “Most probable” means: the word or phrase that appears most frequently in similar contexts across the training data. This is, by definition, a regression to the mean. The model’s default output is the statistical center of all the writing it has seen on a given topic — the average argument, the expected examples, the most common register.
This is not a bug. It is the architecture. The model is designed to produce probable continuations. When those continuations are factual summaries, code completions, or translations, the statistical center is exactly what you want — accuracy, consistency, the expected answer. But when the task is creative writing, the statistical center is precisely what you don’t want. Creativity, by definition, lives at the edges of the distribution. An insight that could have been predicted from the training data isn’t an insight. It’s a recombination of existing patterns — which is another way of saying it’s slop.
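The mechanism is easy to see in miniature. The sketch below uses a toy, hand-written probability table (hypothetical numbers, not a real model) to show what greedy decoding does: it always selects the highest-probability continuation, so the output is, word by word, the statistical center.

```python
# Toy illustration of greedy next-token decoding. The probabilities are
# invented for illustration -- a real model computes them from context --
# but the selection rule is the same: take the argmax.
next_word_probs = {
    "The sunset was": {
        "beautiful": 0.46,          # the expected word
        "stunning": 0.31,           # the other expected word
        "a wound in the sky": 0.02, # the improbable edge
    },
}

def greedy_next(context: str) -> str:
    """Return the most probable continuation -- regression to the mean by design."""
    probs = next_word_probs[context]
    return max(probs, key=probs.get)

print(greedy_next("The sunset was"))  # prints "beautiful"
```

Sampling strategies (temperature, top-p) can spread choices across the distribution, but they widen the center rather than locate a specific edge.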
This is why “prompt engineering” for creative writing is largely a dead end. You can steer the model away from its default center — ask for unusual perspectives, demand surprising connections, specify that the output should be non-obvious. But you are still asking a probability engine to produce improbability. The model can move away from the center, but it cannot move toward a specific edge because it doesn’t have an edge to move toward. Only a human mind — one that has experienced, observed, and connected in a way no other mind has — can supply the specific direction that makes writing genuinely original.
We Were Here First

The argument that AI produces slop implies a contrast — that humans, left to their own devices, produce something better. But this flatters us.
The mechanization of writing is not a recent AI phenomenon. It is a process humans initiated and systematized long before language models existed. Starting with the Iowa Writers’ Workshop in 1936, writing was transformed from a mysterious act of inspiration into an analytical, teachable discipline. This was, in its way, an extraordinary achievement — it democratized craft, made the techniques of good writing transmissible. But it also created something else: a template industry. Syd Field’s three-act paradigm. Christopher Vogler’s “Hero’s Journey.” Robert McKee’s “Story.” Blake Snyder’s “Save the Cat!” These frameworks reverse-engineer successful narratives to establish optimal formulas for dramatic tension. They are, in effect, the human version of backpropagation — the core mechanism used to train neural networks. Programmers adjust a network’s weights to guide its outputs toward pre-established solutions. Writing teachers adjust students to guide them toward pre-established narrative structures. The method is the same. The substrate differs.
The result is predictable. Thousands of students graduate with creative writing degrees each year. They do not produce thousands of great novels. They produce thousands of competently structured novels that follow the templates, hit the beats, satisfy the formulas — and lack the vital spark that would make a reader stop and think. The MFA novel is the original slop. And it doesn’t stop there: genre fiction — detective stories, spy thrillers, romances — runs on recognizable, market-tested formulas; the bestseller lists are populated by books that execute templates well. These are the shelves AI will fill first — not because it has learned to tell stories, but because genre fiction was already running on instructions. Much of what we call “human writing” was already algorithmic before algorithms got involved. AI didn’t create the problem. AI inherited a writing culture that had already optimized for template compliance over genuine thought — and is now perfecting what humans were already doing at scale.
This is not an argument against craft. Craft is necessary. But craft alone — craft without the seed of genuine insight — produces exactly the same hollow competence that AI does. The Iowa Workshop and GPTs are solving the same optimization problem: given a set of patterns that readers recognize as “good writing,” produce text that matches those patterns. One does it through pedagogy; the other does it through token prediction. The output is surprisingly similar, because the underlying process is surprisingly similar.
What Writing Is For

The slop problem looks different depending on what you believe writing is for, and this is where the two camps from the opening reveal their deepest disagreement.
If writing is communication — the transmission of information from one mind to another — then AI wins. It communicates more, faster, at lower cost, with fewer errors. The market for writing-as-communication will be dominated by AI, and this is neither tragic nor avoidable.
If writing is craft — the skilled arrangement of words into pleasing, effective structures — then AI competes. It already produces craft-level prose that passes most reader tests. The MFA graduate and the language model are solving the same optimization problem, and the model has more data.
If writing is thinking — the process by which the writer discovers what they didn’t know they knew — then AI is a tool, not a competitor. The value is in the transformation that happens in the writer’s mind during the act of writing. The output is evidence of the thinking, not a substitute for it. AI can accelerate, scaffold, and pressure-test that thinking, but it cannot do the thinking, because the thinking is what happens to the human in the process.
The spectrum also reveals where voice lives. Voice is a property of sentences — the rhythm of a clause, the unexpected word choice, the silence between lines. A poet works at the word level because the words are the thinking — the sound of “the apparition of these faces in the crowd” cannot be delegated without destroying the work. A novelist constructing a 300-page architecture can afford to work at a higher level of abstraction, descending to the sentence level when voice demands it. An essayist building an argument operates somewhere between: the ideas must cohere at altitude, but the prose must carry a mind’s signature at the ground. AI lets you choose your altitude. The craft is knowing when to climb and when to land. The danger isn’t losing your voice to the machine. It’s never developing one in the first place.
The three audiences of this book land differently on this spectrum. The curious generalist needs to understand that the spectrum exists. The writer fearing displacement needs to know which kind of writing is actually threatened — communication and craft — and which is not: thinking. The technologist building writing tools needs to understand that optimizing for output quality misses the point. The value of writing is in the process, and tools should serve the process, not replace it.
Photography made this kind of separation legible a century before AI. Before the camera, a commissioned portrait served two purposes that had never needed to be distinguished. It documented what someone looked like — and it expressed how a painter saw them. Both lived in the same object. The aristocrat who sat for a portrait needed both, and there was no cheaper alternative for either: if you wanted your likeness preserved, you paid for the painting, and the painter’s vision came along whether you valued it or not.
When the camera arrived, it absorbed the first function entirely. What had required hours of sitting and a trained hand now took an instant. Not everyone could afford to commission a portrait — photography put the documentary function within reach of anyone who could hold a camera. The separation that followed was swift and decisive: communicative image-making moved to photography; painting was left with everything the camera couldn’t do.
What the camera couldn’t do turned out to be exactly what painting was for. Stripped of its documentary utility, painting had to reckon with why it still existed. The answer was the one that had always been there, obscured by usefulness: the expression of a particular way of seeing. Not the reproduction of appearances — the camera did that better — but the revelation of a vision no mechanism could replicate. The portrait as record went to photography. The portrait as seeing stayed with painting, and stayed with it more clearly than before.
AI will make the same separation in writing. Text that is purely communicative — the summary, the report, the brief, the product description — will go to AI the way the documentary portrait went to the camera: faster, cheaper, requiring no specific mind. The market for writing-as-communication will not contract; it will expand. The Jevons paradox holds: making a resource more efficient increases, not decreases, its consumption. The total volume of text in the world will multiply beyond recognition. Most of it will be consumed and forgotten as quickly as a photograph on a phone screen. What remains — what endures in the way that painting endured — will be the writing that carries a specific mind’s way of seeing. Not writing-as-information. Writing-as-thinking.
The Lived Experience Argument

The most venerable defense of human creativity against AI rests on lived experience. The argument has a distinguished lineage.
In 1842, Ada Lovelace observed that Babbage’s Analytical Engine had “no pretension to originate anything” and could only perform tasks it was explicitly ordered to do. In 1949, Sir Geoffrey Jefferson argued that a machine could equal the human brain only if it produced art born from authentic, felt thoughts and emotions — not a randomized, mechanical combination of symbols. Turing himself acknowledged that computers could not share human experiences like falling in love or enjoying the taste of strawberries and cream. Marcel Duchamp described art as a “missing link” — a telepathic bridge that transfers delicate emotions across minds — and AI, having no inner life, cannot participate in that transmission. More recently, Richard Beard has argued that true literature requires the kind of lived experience that AI, trapped in data feedback loops, cannot replicate.
The argument is correct as far as it goes. Lived experience is necessary for great writing. A writer working without an experiential substrate produces hollow text — recombinations of patterns the writer never moved through, prose that knows the shape of meaning but not its weight. On this point, the lived-experience camp is right, and the right move is to grant it plainly.
But necessity is not sufficiency, and the slip between the two is where the defense folds. Plenty of humans with extraordinary experiences produce terrible writing. Suffering does not automatically become literature. War does not automatically become poetry. Most soldiers did not write Catch-22; most widows did not write The Year of Magical Thinking. Experience is the raw material; the writing process is the alchemy that converts the one into the other. The lived-experience camp often skips this middle step entirely, treating experience as if it transmits directly into art. It does not. It has to be worked.
This is where the writing-is-thinking framework has more traction than the experience argument alone. “Writing is thinking” does not depend on whether the thinker has a body. It depends on whether the process of writing generates genuine cognitive transformation — whether the writer ends up somewhere they could not have predicted when they started. That is a harder bar for AI to clear than “has felt emotions,” and a more useful criterion because it points to something testable: did the process of creation change the creator? Lived experience, when it produces great writing, does so by feeding into a process that performs that transformation. Without the process, experience is mute.
The Antidote

The common response to the slop problem — just don’t use AI for writing — misdiagnoses the cause. Slop is not a model failure. It is an insight failure. The model generates probable continuations from whatever input it receives. Give it a topic and a word count, and it will produce topic-shaped text. The model cannot be blamed for the absence of a seed any more than soil can be blamed for failing to grow what was never planted. The correct response is not abstention. It is the discipline of writing only from genuine moments of personal insight, offering those to the model as nucleation points, and using AI to develop what was already alive in your thinking.
John McPhee, after fifty years at The New Yorker, named the principle plainly: “Writing is selection. Just to start a piece of writing you have to choose one word and only one from more than a million in the language.” His criterion for what stays in is almost startling in its simplicity: “If something interests you, it goes in — if not, it stays out. Forget market research. Never market-research your writing.” Slop is what you get when something other than the writer’s interest does the selecting — when a model trained on the average of all interest selects what would interest the average reader, and the result is fluent, plausible, and emptied of a specific mind. Orwell named the same failure from inside his own work: “It is invariably where I lacked a political purpose that I wrote lifeless books and was betrayed into purple passages, sentences without meaning, decorative adjectives and humbug generally.” Slop is a name for that betrayal. The discipline of insight is not a hedge against the machine. It is the discipline of writing.
And here is the turn that the slop discourse misses entirely: the same tool that produces slop can be used to prevent it. The problem is not the tool. It is the direction of use.
When you bring a genuine insight into the writing process — a surprising connection you noticed in your reading, a tension you felt but couldn’t yet articulate, an observation that contradicted your expectations — you give the model something it cannot manufacture: a nucleation point. Its work shifts from generation to amplification, a fundamentally different and more productive operation. The model that produces slop when given a topic produces something alive when given an insight. The input determines the output. The seed determines the plant.
This applies at every phase. At ignition: if your starting point has no friction — no claim that wants to be argued with — AI will produce a smooth surface with nothing underneath. During composition: if you let AI sustain the momentum without your own thinking generating the heat, the prose will read as fluent and empty. At revision: if you hand AI a draft and ask it to “improve” without specifying what the piece is trying to do, it will polish slop into shinier slop. The antidote is the same at every stage: know what your insight is. If you can’t say it in your own words, the work isn’t ready for a collaborator.
The Frontier

The infinite monkey theorem is one of the oldest thought experiments in probability. Give a monkey a typewriter and infinite time, and it will eventually produce the complete works of Shakespeare — not because it understands what it’s typing, but because random processes, given enough attempts, will produce any finite sequence of characters. Most people find this conclusion technically correct and practically absurd. The time required would exceed the age of the universe by many orders of magnitude. The scenario functions, in practice, as an impossibility.
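The scale of that impossibility is worth making concrete. For uniformly random keystrokes on a 26-letter keyboard, the expected number of attempts to produce a given string of n letters is 26 to the power n — astronomical before a single sentence is complete:

```python
# Expected number of attempts for a monkey typing uniformly at random
# on a 26-key (letters-only) keyboard to produce a given target string.
# Each keystroke is independent, so each full attempt at an n-letter
# target succeeds with probability (1/26)**n.
def expected_attempts(target: str, alphabet_size: int = 26) -> int:
    return alphabet_size ** len(target)

print(expected_attempts("hamlet"))         # 26**6, roughly 3.1e8, for one word
print(expected_attempts("tobeornottobe"))  # 26**13, roughly 2.5e18, for one phrase
```

Thirteen letters already demand on the order of a billion billion attempts; the complete works are beyond any physical timescale.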
Now ask the same question with AI in the picture. A language model is not a monkey. It doesn’t type randomly. It has read more text than any human could read in a thousand lifetimes, and it generates output shaped by all of it. It can produce millions of texts a day. If you previously thought Shakespeare-by-random-process was impossible, does AI change your answer?
The question turns on an assumption worth naming: that a text which reads like Shakespeare is Shakespeare. Examine that from two directions.
The first is biographical. Georges Perec’s 1969 novel A Void was written entirely without the letter “e.” An AI could replicate this constraint instantly — lipogrammatic generation is a straightforward technical challenge. But the constraint was not the point. The letter “e” is phonetically linked to the French word “eux” — “them.” Perec’s parents were Polish Jews who perished during the Holocaust. The absent letter enacts the absent people. The formal constraint is a monument to a specific grief. Every sentence is simultaneously a feat of construction and an act of mourning. A Void without the biographical context is a clever lipogram. With it, it’s one of the most moving novels of the twentieth century. The text is the same in both cases. The meaning is not.
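How trivial the constraint is, mechanically, can be shown in a few lines: checking that a text avoids a letter is a one-line predicate, and enforcing it on generated candidates is a filter. (The sentences below are invented examples.) What no filter can supply is the reason the letter is missing.

```python
# A lipogram constraint is mechanically trivial: verifying it is one line,
# and enforcing it on candidate sentences is a simple filter.
def is_lipogram(text: str, banned: str = "e") -> bool:
    """True if the text avoids the banned letter entirely."""
    return banned.lower() not in text.lower()

candidates = [
    "A void sits at this story's origin.",  # e-free
    "The letter is everywhere.",            # violates the constraint
]
print([c for c in candidates if is_lipogram(c)])  # only the e-free sentence survives
```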
The second direction doesn’t require a theory of meaning at all. Suppose AI can replicate Van Gogh. Suppose it generates, at scale, paintings indistinguishable from his in style, technique, and emotional register. The question is not whether these are “real” Van Goghs. The question is: so what?
Van Gogh already exists. What made him Van Gogh was not the surface — the thick impasto, the swirling skies, the saturated color — but the fact that no one had seen like that before. He was standing at a frontier that hadn’t been crossed. Generating Van Gogh now means reproducing something that already happened. The copy is proof that the frontier has been closed, not that a new one has opened.
The same logic applies to writing. AI can produce prose in the manner of Hemingway, Woolf, Carver — and will do so with increasing fidelity. But the next great writer is not someone who writes like any of them. The next great writer is someone who articulates something about now — this specific moment, this friction, this thing the world has not yet found words for — in a way that makes readers feel: yes, that’s exactly it, I couldn’t have named it but now that you have I can’t unsee it. That requires presence at the actual frontier. Not the frontier in the training data. The one that hasn’t been crossed yet.
This is what AI structurally cannot do. Its data ends in the past. It can reproduce every frontier that has already been crossed. The one it cannot reach is the frontier of the present — because that frontier requires a specific life being lived right now, in a moment no dataset contains.
The right question was never whether AI can produce great work. It is whether AI can be the next Shakespeare — not someone who writes like Shakespeare, but someone who, like Shakespeare, arrives at a form nobody had before and reorganizes how everyone after thinks. That is what greatness has always been: not the mastery of a prior form, but the origination of a new one. AI can close frontiers. It cannot open them.