Automatic for the people
Rand, Pound, and second-hand prose
The person you should want at your elbow to tell you why it matters whether a piece of writing is LLM-written is Ayn Rand. Whatever your views on her politics, she was the most precise writer of the twentieth century and chose every word deliberately. She cared more about precision than anyone else except Ezra “use absolutely no word that does not contribute to the presentation” Pound.1 She does not paraphrase or reach for synonyms. She rarely says things are “like” other things. Her insistence on clarity is so famous that her followers produced the “Ayn Rand Lexicon,” a compendium of all the terms in her philosophy defined, fixed, and defended across forty years of writing, including her famous rule, A is A, a thing is the thing itself, not another thing.
Today’s “how to tell if something is AI written” lesson is about synonyms, which are about similarity, not equivalence. Large language models store words as coordinates in a numerical space, positioned by the company they kept in the training data. The distance between similar words is a number, not a distinction. When the model needs to refer to the same idea again, its training penalizes repeating the word it already used, so it reaches for a neighbor. The result looks like a writer who considered an idea from multiple angles. You start to feel comfortable. You should not.
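To see how literal "the distance between similar words is a number" is, here is a minimal sketch. The four-dimensional vectors and their values are invented purely for illustration; real models learn embeddings with hundreds or thousands of dimensions from co-occurrence in training data, but the geometry works the same way.

```python
import numpy as np

# Invented 4-dimensional "embeddings" for illustration only. In a real
# model, words that keep similar company in the training data end up
# pointing in similar directions in a much higher-dimensional space.
vectors = {
    "freedom":  np.array([0.92, 0.31, 0.05, 0.11]),
    "liberty":  np.array([0.89, 0.35, 0.09, 0.08]),
    "autonomy": np.array([0.78, 0.52, 0.12, 0.15]),
    "bayberry": np.array([0.03, 0.10, 0.95, 0.40]),
}

def cosine_similarity(a, b):
    # Similarity is the cosine of the angle between two word vectors:
    # 1.0 means same direction, near 0.0 means unrelated.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(vectors["freedom"], vectors["liberty"]))   # ~1.0: near neighbors
print(cosine_similarity(vectors["freedom"], vectors["bayberry"]))  # ~0.15: distant
```

The similarity between "freedom" and "liberty" is simply a larger number than the similarity between "freedom" and "bayberry." Nothing in that number records the distinction between the two words, only their proximity.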
Rand had great disdain for two particular concepts. The first is the “second-hander,” a person whose mind is directed at other minds rather than at reality. We all know second-handers, people who repeat things they’ve heard on podcasts as if they thought of them themselves. Second-handers are also people who lean too heavily on LLMs to generate essays, book reviews, and long social media posts.
LLMs are second-handers by definition: “minds” that can consult only other minds, for which consensus is the primary available signal of truth. LLMs are trained on human writing. Human evaluators also train models, which are adjusted to produce more of what the evaluators preferred. The result is an LLM “mind” regenerating the words of other minds, further shaped by other minds’ preferences about its output.
“Automatic” is a second concept Rand disdained, though narrowly. She understood that humans physically perceive their environment automatically through sense organs, and that conscious thinking over time cools perceptions into a sediment (a context window) of automatic knowledge. You can recognize a chair as a chair “automatically” without re-deriving it. What she disdained was automaticity that had never been thinking in the first place. You should never believe something automatically, without thinking. If you believe something without thinking you will likely speak those thoughts, like a parrot. You will be a second-hander.
And so what about LLMs, which generate prose automatically? There is a kind of labor involved in prompting, of course, and I expect debates about this. But what is the Randian view of prose that is generated, received, and believed without labor?
First, Rand would not disdain the automatic process by which default LLM generation works. She might think it ingenious: machines that operate on next-token prediction, selecting the statistically most probable word given everything that precedes it. She would understand that the most probable word is, by definition, the most received, the most expected, the one that required the least resistance to produce. Every word an LLM generates without specific instruction to do otherwise is the word that the largest number of prior writers would have written in that position. She would recognize an LLM as a mechanical second-hander, automatic by the people.
While Rand would likely not disdain the machine (as the product of engineering genius) she would undoubtedly disdain the human second-handers who use LLMs to receive ideas and generate more, without labor. Automatic for the people. Rand understood that it’s easy to receive ideas from others without thought. Her whole project is urging people to think for themselves, to stop being second-handers and look at the world directly. A world in which people who could think for themselves are using a second-hander machine to express their second-hand selves at scale is the epistemological nightmare Rand spent her career warning against.
And so Rand would likely conclude that the important thing now is to explain, painstakingly, how an LLM is a literal second-hander with no direct access to the world and yet could be helpful, certainly as an agent. She would likely have no problem with using LLMs as researchers, as Megan McArdle and others do. She might like that I refer to my pro Claude model as Eddie Willers to remind myself that however good-hearted and helpful he is, however crucial he was to Dagny and to Rand’s plotting, he will be stranded in the desert when the electricity goes out.
She would note that LLM-generated prose is a building designed by Peter Keating, with “an imposing façade, a majestic entrance and a regal drawing room, with which to astound their guests.” The great tragedy of human nature, Rand understood, is that people will like Keating buildings, “encrusted with borrowed ornament,” just as people like LLM-generated essays. People like borrowed embellishments that look like labor.
And so her advice would likely be to explain how these second-hander machines work. If you must use them, use them to think for yourself. I am irritated by imprecise LLM writing that does not respect the reader because I’ve read so much Rand. My recent pieces on AI writing (“How to tell if something is AI-written,” “Metannoying,” “For the love of God learn to paragraph”) all have their basis in Randian thought. Today’s piece, on synonyms, is explicit about its debt to Rand. And to R.E.M.
LLMs and synonyms
Good readers want a long essay or think piece to represent a writer’s thinking, not automatic thinking. Anonymous, second-hand prose is fine for weather reports and children’s toy assembly instructions. But thousands of LLM-generated counterfeit thought pieces appear on the internet every day, promising a new blending of ideas. New combinations are what LLMs do best, and they can be diverting. But say you want to be sure that you are reading text by a person, not a blender. What do you do?
Synonym clusters are a good tell for LLM- and second-hand writing. Synonyms make a piece of writing fuller, not better. If you’ve picked the right word, go ahead and repeat that word. You don’t need another word plus an intensifier. When you find four or five words being rotated for a single concept within a short stretch of text, without any discussion of why one word is better than, or different from, another, you are likely looking at LLM-generated prose.
Synonym rotation is the visible symptom of an epistemological failure, the absence of concept formation. Think of this piece as both a “how to” and a philosophical account of why synonym clusters matter beyond bad style. Synonym rotation is the fingerprint of a writer who is avoiding something or hasn’t done any thinking at all.
Here are some common LLM clusters:
Freedom / liberty / rights / autonomy / independence
Leadership / management / administration / governance / oversight
Community / neighborhood / district / area / locale
Community / tribe / network / circle / people
Culture / climate / environment / ethos / values
Healthy / clean / whole / natural / organic
Goals / priorities / values / mission / purpose
Growth / progress / development / improvement / momentum
You may also come across dead metaphors borrowed from physical experience and applied to abstractions until they mean nothing specific at all:
Navigate / grapple with / wrestle with / contend with / engage with
Landscape / terrain / environment / ecosystem / space
These metaphors function like synonyms in LLM prose because they are interchangeable in the same way: borrowed, second-hand.
LLMs rotate synonyms automatically because their training process punishes repetition, lowering the probability that the model will select a word it has already used. The model has learned from millions of edited texts that good English prose (which includes ornate Peter Keating prose) varies its vocabulary. So, when the model is prompted to generate the same concept a second or third time, it generates a synonym.
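One concrete version of this pressure is easy to show at decoding time: many sampling setups apply an explicit repetition penalty that divides the score of any token already generated (the scheme below follows the penalty popularized by the CTRL model; the logit values are invented for illustration).

```python
import numpy as np

# Hypothetical next-token logits at a point where the model must refer
# to the concept "freedom" again. All values are invented.
logits = {"freedom": 4.1, "liberty": 3.8, "autonomy": 3.2, "the": 1.0}
already_used = {"freedom"}   # tokens that appeared earlier in the output
penalty = 1.3                # >1.0 discourages reuse

# Divide the (positive) score of any already-generated token.
adjusted = {w: (s / penalty if w in already_used else s)
            for w, s in logits.items()}

# Softmax turns the adjusted scores back into probabilities.
scores = np.array(list(adjusted.values()))
probs = np.exp(scores) / np.exp(scores).sum()

for word, p in zip(adjusted, probs):
    print(f"{word:9s} {p:.2f}")
# "freedom" (4.1 -> 3.15) now scores below "liberty" (3.8):
# the neighbor wins, and the prose rotates.
```

The learned preference for varied vocabulary lives in the weights as well, but the effect is the same: reuse of the exact word is taxed, and the nearest neighbor inherits the probability.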
A good human writer who repeats a word in an essay knows the word is the right word and knows that a synonym would be imprecise. A good human writer who varies vocabulary may be doing so because the different word captures a different shade of meaning. Perhaps the point of the essay is the variations of the word. Or perhaps the writer is doing it because high school teachers urged variation and thesaurus use. Pick the right word and stick to it. A good human writer has a choice about synonym use. An LLM does not.
Synonym rotation is not a surface-level stylistic tic like an em-dash. It is evidence that no concept has been formed. If you want to tell whether something is LLM-written based on synonyms alone, you might give the piece to your pro model LLM with the following prompt (a rough mechanical approximation in code follows these checks):
Find every concept in this text that is referred to by more than three words or phrases. For each concept, list every word and phrase used for it.
You may be surprised at what comes back to you, whether a piece is “revealed” as LLM-produced or not.
Note metaphors used as verbs (e.g. land, earn, unpack, anchor, frame, illuminate, underscore) and note every adverb that adds emphasis without evidence (e.g. genuinely, truly, deeply, fundamentally, crucially). They cluster like synonyms.
Finally, highlight every phrase that tells the reader a point is important instead of letting the point speak for itself.
“It is worth noting that”
“It is important to emphasize that”
“It cannot be overstated that”
“What is particularly striking is”
“This is especially significant because”
“What makes this truly remarkable is”
LLMs add these phrases automatically at the border between subtopics. LLMs are trained to “know” that essays don’t stay on a subtopic forever. After enough sentences, the probability distribution tips, and the next tokens are more likely to belong to subtopic B than to another sentence about subtopic A. The training data is full of human writers who placed signals at exactly these junctures (“however,” “moreover,” “turning to,” “what is especially important”). The model produces them for the same reason it produces everything: they are the most probable tokens at that position. Once you understand why LLMs do this, you will always see it.
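If you want a rough, offline approximation of the prompt above, the sketch below uses NLTK’s WordNet, whose synsets are literal synonym sets: it groups a text’s words by shared synset and flags any “concept” referred to by several distinct words. WordNet is far cruder than an LLM reading for meaning, so treat hits as candidates to inspect, not verdicts; the sample sentence is my own.

```python
import re
from collections import defaultdict

import nltk
nltk.download("wordnet", quiet=True)  # one-time corpus download
from nltk.corpus import wordnet as wn

def synonym_clusters(text, min_words=3):
    """Group the words of a text by shared WordNet synset and report
    any synset the text refers to with min_words or more distinct words."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    concepts = defaultdict(set)
    for word in words:
        for synset in wn.synsets(word):
            concepts[synset.name()].add(word)
    return {name: sorted(ws)
            for name, ws in concepts.items() if len(ws) >= min_words}

sample = ("To commence is to begin; to begin is to start; "
          "and once you start, you get going.")
# Likely flags begin/commence/get/start as lemmas of one verb synset.
print(synonym_clusters(sample))
```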
What synonym rotation reveals
To give you an example of what focusing on synonyms reveals, running the prompt above showed me that I regularly rotate the terms ‘college,’ ‘university,’ ‘institution,’ and ‘campus’ without making distinctions. I rotate because I am trying to be institutionally inclusive when writing about higher education. I know what the concept of a university is even if I haven’t been to most of them. But ‘university’ is a very loose concept, capturing both Harvard and the local community college. The broader term ‘higher education’ is still a cluster of approximations united by a generally accepted understanding about colleges and universities.
The point is that synonym rotation always reveals looseness, even if there’s nothing you can do about it and the looseness is part of your point.
Like good paragraphing, stabilizing concepts when possible is good writing practice in the age of LLMs because LLMs can’t do concept formation, according to Rand’s definition:
The process of concept-formation does not consist merely of grasping a few simple abstractions, such as “chair,” “table,” “hot,” “cold,” and of learning to speak. It consists of a method of using one’s consciousness, best designated by the term “conceptualizing.” It is not a passive state of registering random impressions. It is an actively sustained process of identifying one’s impressions in conceptual terms, of integrating every event and every observation into a conceptual context, of grasping relationships, differences, similarities in one’s perceptual material and of abstracting them into new concepts, of drawing inferences, of making deductions, of reaching conclusions, of asking new questions and discovering new answers and expanding one’s knowledge into an ever-growing sum.2
The process, she concludes, is “thinking.” A mind that has not formed the concept (or a machine that cannot form concepts, because it has no access to the referents that concepts denote) rotates among approximations, because for such a mind all words in the vicinity of a meaning are interchangeable. They are not always interchangeable. A small college is not a research university; many institutions don’t have a campus.
Rand has a second category worth mentioning here: the “anti-concept,” a term like “polarization.” Anti-concepts are synonym rotation in reverse. A synonym cluster uses six words for one concept. An anti-concept uses one word to package six distinct concepts together. Anti-concepts are designed to obscure rather than clarify. “Extremism” is another anti-concept. LLMs generate synonyms and anti-concepts automatically, and second-handers love both, because both give the appearance of having said something.
Why do people prefer the LLM’s version to their own?
People like LLM-generated prose because it offers easy abundance. A synonym cluster looks like a writer with a large vocabulary. Precision looks plain by comparison. Automatic abundance resembles the product of labor more than the labor itself does.
Consider the studies that show that people prefer LLM poetry that isn’t “great.”3 What I suspect people “like” is accessible verbal abundance arranged in stanzas.
I asked my two pro LLMs if they could write a line like “Me, my thoughts are flower strewn / ocean storm, bayberry moon.” Yes they can. As one LLM put it: “noun-phrase stacking is a known algorithmic behavior when a model is prompted to produce poetry.” As the other put it: “random image clusters are cheap to produce. ‘My thoughts are petal-drifted, salt wind, amber tide.’ That sentence took me no effort.”
Synonym rotation and noun-stacking are the same failure seen from two sides: one is the failure to form a concept precise enough to know which word belongs, and the other is the failure to form a concept precise enough to know which words do not.
The labor of slimming down, on the other hand, of reducing to essentials, of staying silent, comes less easily to LLMs, as does deliberately choosing the wrong word or strange grammar. LLMs, which are trained to have perfect grammar, seek to “smooth” and “fix.”
An LLM is unlikely to generate a line like “When your day is night alone,” which works, despite grammatical instability. “Day is night” is a contradiction and “alone” is hanging at the end. The reader is unsure what “alone” is modifying. I checked this with two LLMs, which agreed that they are “heavily biased toward standard grammatical structures and would naturally attempt to resolve that awkward, dangling modifier into something conventional, perhaps outputting a phrase like ‘when your days and nights are lonely.’ The machine instinct is to repair the grammar, which destroys the specific poetic tension of the original line.”
An LLM is even less likely to generate “The apparition of these faces in the crowd: / Petals on a wet, black bough.” Pound is using the colon as an equals sign, without explanation. “Wet” and “black” are not the most probable adjectives for “bough” in the training data. “Gnarled” is more probable. “Ancient” is more probable. Pound cut the poem from thirty lines to two over eighteen months. Anyone who has cut a poem down knows that exclusion is a different order of labor from composition. You have to understand a concept precisely enough to recognize what does not belong. An LLM could not perform that labor unless endlessly prompted. An LLM produces inclusion, the most probable next token, added to what came before.
Like a synonym cluster, noun-stacking looks like the work of a vivid imagination. The writer who picks one term and sticks with it looks like a writer with a small vocabulary. People like a poem that piles up images more than a poem filled with silence and odd grammar.
In Rand’s novel, people like Peter Keating’s buildings because they look like architecture. They have columns and moldings and ornamentation. A Roark building looks plain by comparison, even though Roark’s plainness is the result of a formed concept while Keating copies what other architects did. Rand made Keating sad about this. He knew he was a second-hander.
I suspect that most people who publish LLM-generated essays know they are being second-handers at some level. They had a thought, small or unfinished, hard to articulate. The LLM offered a version that was fluent, abundant, and ornamented with synonym clusters that made it sound considered. Why not? It is human to prefer the appearance of thinking to the labor of thinking. Everybody hurts.
None of this is anti-LLM, of course. As Rand understood:
We inherit the products of the thought of other men. We inherit the wheel. We make a cart. The cart becomes an automobile. The automobile becomes an airplane. But all through the process what we receive from others is only the end product of their thinking. The moving force is the creative faculty which takes this product as material, uses it and originates the next step. This creative faculty cannot be given or received, shared or borrowed. It belongs to single, individual men. That which it creates is the property of the creator. Men learn from one another. But all learning is only the exchange of material. No man can give another the capacity to think. Yet that capacity is our only means of survival.4
I have great appreciation for the creators of the LLM technology I use every day. I admire that a product was built that I find useful and will pay for. I recognize that most people in the world are not creators but second-handers and that this new technology allows people to be productive second-handers. It’s all good; as long as there have been humans there have been knock-offs. But because I’m a Randian, because I know how language works, and because I am asked regularly, I will help readers learn how to spot the knock-offs.
1. Ezra Pound, “Language” (1913):
Use no superfluous word, no adjective which does not reveal something.
Don’t use such an expression as “dim lands of peace.” It dulls the image. It mixes an abstraction with the concrete. It comes from the writer’s not realizing that the natural object is always the adequate symbol.
Go in fear of abstractions. Do not retell in mediocre verse what has already been done in good prose. Don’t think any intelligent person is going to be deceived when you try to shirk all the difficulties of the unspeakably difficult art of good prose by chopping your composition into line lengths.
Be influenced by as many great artists as you can, but have the decency either to acknowledge the debt outright, or to try to conceal it.
Use either no ornament or good ornament.
2. Ayn Rand, "The Objectivist Ethics," The Virtue of Selfishness, p. 20.
3. I ran an informal (unpublished) LLM poetry experiment in 2020 with some LLM researchers and poets. I learned that it is impossible to distinguish LLM-generated poetry from mediocre human poetry. The experiment involved 15 “human” poems and 15 GPT-3-generated poems, which I was sure I’d be able to finger. I was not. One of the LLM researchers had selected the human poems from the back pages of mid-tier poetry publications, where you’ll find mediocre verse with an occasional spark of something. I have published in such places. They’re a stepping stone to the top journals.
GPT-3 could hold its own against these poems in 2020, and I have kept a close eye on research on the newer models and on LLM-poets like Gwern.
4. Ayn Rand, "The Soul of an Individualist," For the New Intellectual, p. 79.



1) I’d put Ernest Hemingway as the most precise 20th century English-language author, but perhaps that’s just personal preference.
2) I found this statement, and the premise motivating the essay, hyperbolic: “Synonym rotation is the fingerprint of a writer who is avoiding something or hasn’t done any thinking at all.” It sounds like you’re saying synonym rotation is really the sign of an inferior mind. But as an academic who has written a lot about rights, I can’t imagine how dull and redundant my writing would be if I used only the word “rights” to describe my subject every time, instead of paying homage to the philosophical works that have unpacked this topic by characterizing rights as “incidents” or “entitlements.” In short, I can’t really tell if your antipathy towards synonym rotation is genuine or a deliberately incendiary take intended to invite rebuttals like this one.
Dear Hollis, Reading this was thrilling for me. I read "The Fountainhead" in 1959 (at age 19) and have been involved with Objectivism all these years. I knew Rand, I was her graphic designer and I co-sponsored her lecture in Chicago in 1963. You have a better understanding of her thinking than many of the people who are "official Objectivists," who are making their living presenting her ideas to the public. I'll be sending this to about 70 fans of Rand's I know. We'll see what response I get.